
Releases: NVIDIA/GenerativeAIExamples

v0.8.0

21 Aug 03:11
4e86d75

This release completely refactors the directory structure of the repository for a more seamless and intuitive developer journey. It also adds support for deploying the latest accelerated embedding and reranking models across the cloud, data center, and workstation using NVIDIA NeMo Retriever NIM microservices.

Added

Changed

  • Major restructuring and reorganization of the assets within the repository
    • The top-level experimental directory has been renamed to community.
    • The top-level RetrievalAugmentedGeneration directory has been renamed to RAG.
    • The Docker Compose files inside the top-level deploy directory have been migrated to example-specific directories under RAG/examples. The vector database and on-prem NIM microservice deployment files are under RAG/examples/local_deploy.
    • The top-level models directory has been renamed to finetuning.
    • The top-level notebooks directory has been moved under RAG/notebooks and organized by framework.
    • The top-level tools directory has been migrated to RAG/tools.
    • The top-level integrations directory has been moved into RAG/src.
    • RetrievalAugmentedGeneration/common now resides under RAG/src/chain_server.
    • RetrievalAugmentedGeneration/frontend now resides under RAG/src/rag_playground/default.
    • The 5 mins RAG No GPU example, previously under the top-level examples directory, is now under community.

Deprecated

v0.7.0

18 Jun 15:52
b43e8b0

This release switches all examples to use cloud-hosted, GPU-accelerated LLM and embedding models from the NVIDIA API Catalog by default. It also deprecates support for deploying on-prem models using the NeMo Inference Framework container and adds support for deploying accelerated generative AI models across the cloud, data center, and workstation using the latest NVIDIA NIM-LLM.

Added

Changed

  • All examples now use Llama 3 models from the NVIDIA API Catalog by default. A summary of the updated examples and the models they use is available here.
  • Switched the default embedding model of all examples to the Snowflake arctic-embed-l model.
  • Added more verbose logs and support for configuring the chain server's log level using the LOG_LEVEL environment variable.
  • Bumped the versions of the langchain-nvidia-ai-endpoints and sentence-transformers packages and the Milvus containers.
  • Updated base containers to the Ubuntu 22.04 image nvcr.io/nvidia/base/ubuntu:22.04_20240212.
  • Added llama-index-readers-file as a dependency to avoid runtime package installation within the chain server.
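The LOG_LEVEL variable can be honored with standard Python logging. This is a minimal sketch of the idea, not the chain server's actual implementation; only the variable name comes from the release notes.

```python
import logging
import os

# Read the desired level from the environment, defaulting to INFO.
# Unknown values fall back to INFO rather than crashing at startup.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.INFO))
logging.getLogger("chain_server").debug("debug logging enabled")
```

A deployment would then set the variable before starting the service, for example `LOG_LEVEL=DEBUG docker compose up`.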

Deprecated

v0.6.0

10 May 17:19
e711143

This release adds the ability to switch between API Catalog models and on-prem models using NIM-LLM and adds documentation on how to build a RAG application from scratch. It also releases a containerized end-to-end RAG evaluation application integrated with the RAG chain-server APIs.

Added

Changed

  • Renamed the csv_rag example to structured_data_rag.
  • Model engine name updates
    • The nv-ai-foundation and nv-api-catalog LLM engines are renamed to nvidia-ai-endpoints.
    • The nv-ai-foundation embedding engine is renamed to nvidia-ai-endpoints.
  • Embedding model updates
    • The developer_rag example uses the UAE-Large-V1 embedding model.
    • API Catalog examples use ai-embed-qa-4 instead of nvolveqa_40k as the embedding model.
  • Ingested data now persists across multiple sessions.
  • Updated langchain-nvidia-ai-endpoints to version 0.0.11, enabling support for models like Llama 3.
  • Added file-extension-based validation that raises an error for unsupported files.
  • Increased the default output token length in the UI from 250 to 1024 for more comprehensive responses.
  • Added stricter chain-server API validation to enhance API security.
  • Updated the versions of llama-index and pymilvus.
  • Updated the pgvector container to pgvector/pgvector:pg16.
  • LLM model updates
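The file-extension validation mentioned above might look like the following sketch; the supported-extension set and function name are assumptions for illustration, not the repository's actual code.

```python
import pathlib

# Assumed set of ingestible file types, for illustration only.
SUPPORTED_EXTENSIONS = {".pdf", ".txt", ".md"}

def validate_extension(filename: str) -> None:
    """Raise ValueError for files the ingestion pipeline cannot handle."""
    suffix = pathlib.Path(filename).suffix.lower()
    if suffix not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"Unsupported file type: {suffix or filename!r}")
```

Rejecting unsupported files at upload time surfaces a clear error to the caller instead of a confusing failure deep inside document parsing.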

v0.5.0

20 Mar 18:10
6de0008

This release adds new dedicated RAG examples showcasing state-of-the-art use cases, switches to the latest NVIDIA API Catalog endpoints, and refactors the API interface of the chain-server. It also improves the developer experience by adding GitHub Pages based documentation and streamlining the example deployment flow with dedicated compose files.

Added

Changed

  • Switched from NVIDIA AI Foundation to NVIDIA API Catalog endpoints for accessing cloud-hosted LLM models.
  • Refactored the API schema of the chain-server component to support runtime specification of LLM parameters such as temperature, max tokens, and chat history.
  • Renamed the llm-playground service in compose files to rag-playground.
  • Switched the base containers for all components from PyTorch to Ubuntu, and optimized both container build time and container size.
  • Deprecated YAML-based configuration to avoid confusion; all configuration is now environment-variable based.
  • Removed the requirement to hardcode NVIDIA_API_KEY in the compose.env file.
  • Upgraded all Python dependencies for the chain-server and rag-playground services.
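A chain-server request carrying runtime LLM parameters could look like this sketch; the field names are assumptions chosen for illustration, not the actual refactored schema.

```python
import json

# Hypothetical request body: temperature, max tokens, and chat history
# travel with each request instead of being fixed in server configuration.
payload = {
    "messages": [
        {"role": "user", "content": "Summarize the uploaded document."},
    ],
    "temperature": 0.2,
    "max_tokens": 1024,
    "use_knowledge_base": True,
}
body = json.dumps(payload)
```

Letting clients set these knobs per request means a single deployed server can serve both terse, deterministic answers and longer, more exploratory ones.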

Fixed

  • Fixed a bug causing hallucinated answers when the retriever fails to return any documents.
  • Fixed some accuracy issues across all the examples.

v0.4.0

22 Feb 20:51

This release adds new dedicated notebooks showcasing usage of cloud-based NVIDIA AI Foundation models, upgrades the Milvus container version to enable GPU-accelerated vector search, and adds support for the FAISS vector database. Detailed changes are listed below:

Added

  • New dedicated notebooks showcasing usage of cloud-based NVIDIA AI Foundation models through LangChain connectors, as well as local model deployment using Hugging Face.
  • Upgraded the Milvus container version to enable GPU-accelerated vector search.
  • Added support to interact with models behind NeMo Inference Microservices using the new model engines nemo-embed and nemo-infer.
  • Added support for providing an example-specific collection name for vector databases using an environment variable named COLLECTION_NAME.
  • Added FAISS as a generic vector database solution behind utils.py.
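The COLLECTION_NAME variable can be read with a per-example fallback. A minimal sketch, where the helper name and the default value are assumptions, not the repository's code:

```python
import os

def get_collection_name(default: str = "rag_collection") -> str:
    # COLLECTION_NAME comes from the release notes; the fallback
    # default shown here is purely illustrative.
    return os.environ.get("COLLECTION_NAME", default)
```

Each example's compose file can then set a distinct collection so that ingested documents from different examples do not mix in a shared vector database.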

Changed

  • Upgraded and changed the base containers for all components to PyTorch 23.12-py3.
  • Added a LangChain-specific vector database connector in utils.py.
  • Changed speech support to use a single channel for Riva ASR and TTS.
  • Changed the get_llm utility in utils.py to return a LangChain wrapper instead of LlamaIndex wrappers.

Fixed

  • Fixed a bug causing an empty rating in the evaluation notebook.
  • Fixed the document search implementation of the query decomposition example.

v0.3.0

22 Jan 16:48
3d29acf

This release adds support for the PGVector vector database, speech-in/speech-out support using Riva, and RAG observability tooling. It also adds a dedicated example for a RAG pipeline using only models from NVIDIA AI Foundation and one example demonstrating query decomposition. Detailed changes are listed below:

Added

Changed

  • Upgraded the LangChain and LlamaIndex dependencies for all containers.
  • Restructured the README files for better intuitiveness.
  • Added a provision to plug in multiple examples using a common base class.
  • Changed the minio service's port from 9000 to 9010 in the Docker-based deployment.
  • Moved the evaluation directory from the top level to under tools and created a dedicated compose file.
  • Added an experimental directory for plugging in experimental features.
  • Modified notebooks to use TRT-LLM and NVIDIA AI Foundation connectors from LangChain.
  • Changed the ai-playground model engine name to nv-ai-foundation in configurations.
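A common base class for plugging in examples could be sketched as follows; the class and method names are illustrative assumptions, not the repository's actual interface.

```python
from abc import ABC, abstractmethod

class BaseExample(ABC):
    """Hypothetical interface every pluggable RAG example implements."""

    @abstractmethod
    def ingest_docs(self, file_path: str) -> None:
        """Load a document into the example's vector store."""

    @abstractmethod
    def llm_chain(self, query: str) -> str:
        """Answer a query using the example's RAG chain."""

class EchoExample(BaseExample):
    # Trivial subclass used only to show the plug-in pattern.
    def ingest_docs(self, file_path: str) -> None:
        pass

    def llm_chain(self, query: str) -> str:
        return f"echo: {query}"
```

With a shared abstract interface, the server can discover and serve any example that implements these methods without example-specific wiring.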

Fixed

Release v0.2.0

15 Dec 19:54
f7a520f

This release builds on the feedback received and brings many improvements, bug fixes, and new features. It is the first to include support for NVIDIA AI Foundation models and for quantized LLM models. Detailed changes are listed below:

What's Added

What's Changed

  • Restructured the repository to allow better open source contributions.
  • Upgraded dependencies for the chain server container.
  • Upgraded the NeMo Inference Framework container version; no separate sign-up is needed now for access.
  • The main README now provides more details.
  • Documentation improvements.
  • Better error handling and reporting mechanisms for corner cases.
  • Renamed the triton-inference-server container and service to llm-inference-server.

What's Fixed

  • #13: pipeline not able to answer questions unrelated to the knowledge base.
  • #12: type checking while uploading PDF files.

v0.1.0

16 Nov 19:51
Bump postcss and next (#4)

Bumps [postcss](https://github.com/postcss/postcss) to 8.4.31 and updates ancestor dependency [next](https://github.com/vercel/next.js). These dependencies need to be updated together.


Updates `postcss` from 8.4.14 to 8.4.31
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/8.4.14...8.4.31)

Updates `next` from 13.4.12 to 13.5.6
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v13.4.12...v13.5.6)

---
updated-dependencies:
- dependency-name: postcss
  dependency-type: indirect
- dependency-name: next
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>