Commit History

Update src/constants/config.py
015d013
verified

LeoNguyen101120 committed on

Update src/utils/clients/llama_cpp_client.py
d1948a3
verified

LeoNguyen101120 committed on

Enhance image generation and configuration management: Integrate environment variable loading for API keys in config.py, update image generation parameters in image_service.py, and refine system prompts to ensure independent processing of image requests. Adjust vector store service to utilize Chroma for improved performance.
4834784

LeoNguyen101120 committed on
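
A minimal sketch of the environment-variable loading this commit adds to config.py, assuming python-dotenv is used; the key names shown are illustrative, not the project's actual variables:

```python
# config.py (sketch): load API keys from the environment instead of hard-coding them.
# Assumes python-dotenv is installed; the variable names are illustrative.
import os

from dotenv import load_dotenv

load_dotenv()  # reads a local .env file if one is present

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
HF_API_KEY = os.getenv("HF_API_KEY")

if OPENAI_API_KEY is None:
    # Fail early with a clear message rather than at the first API call.
    raise RuntimeError("OPENAI_API_KEY is not set; add it to .env or the environment.")
```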

Update requirements and refactor client integration: Add extra index URL for PyTorch in requirements.txt, integrate open_ai_client in main.py, and adjust image generation parameters in image_service.py. Refactor llama_cpp_client to improve model loading configuration and enhance error handling in image_pipeline_client.
32efff5

LeoNguyen101120 committed on

Update documentation and refine requirements: Enhance the README with detailed installation instructions, Docker deployment steps, and key dependencies. Update requirements files to clarify optional packages and adjust CUDA-related dependencies. Modify .gitignore to include cache directories and ensure proper resource management in the application.
e3a80c0

LeoNguyen committed on

update
10ec9ff

LeoNguyen committed on

Refactor Dockerfile and requirements: Simplify Python installation by removing unnecessary packages and adjust pip command in Dockerfile. Comment out unused dependencies in requirements_for_server.txt for clarity.
bfc6577

LeoNguyen committed on

Merge branch 'main' of https://github.com/nguyentronghuan101120/ai-assistance-server
d9de609

LeoNguyen committed on

Update Dockerfile and requirements: Switch to NVIDIA CUDA base image, install Python 3.11 and necessary dependencies, and adjust CMD port. Update requirements files to include new dependencies and modify paths for local packages. Refactor main.py and config.py to comment out unused imports and settings for improved clarity.
59c5830

LeoNguyen101120 committed on

Update server requirements and refactor torch configuration: Uncomment necessary dependencies in requirements_for_server.txt for diffusers, accelerate, transformers, and torch. Simplify torch configuration in config.py by removing the dedicated function and directly initializing device settings and model optimization parameters.
d450489

LeoNguyen committed on

Refactor torch configuration and error handling: Move torch import into a dedicated function to improve error handling for missing dependencies. Update device setup logic and model optimization settings to ensure proper initialization and configuration.
41a75e4

LeoNguyen committed on
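
The dedicated torch setup function described in this commit might look roughly like the following; the function name, fallbacks, and error message are assumptions:

```python
# config.py (sketch): import torch inside a helper so a missing install fails clearly,
# then pick a device. Names and defaults are assumptions, not the project's code.
def get_torch_device() -> str:
    try:
        import torch
    except ImportError as exc:  # torch is an optional, heavyweight dependency
        raise ImportError(
            "torch is required for local inference; install it from "
            "requirements_for_server.txt or with `pip install torch`."
        ) from exc

    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():  # Apple Silicon
        return "mps"
    return "cpu"
```

The commit above it in this history (d450489) later removes the dedicated function and initializes the device settings directly in config.py.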

Refactor image pipeline client and update requirements: Move torch and StableDiffusionPipeline imports inside the load_pipeline function for better error handling. Add ImportError exception to guide users on missing dependencies. Comment out the torch version in requirements_for_server.txt for clarity.
d2849b9

LeoNguyen committed on
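
The lazy-import pattern this commit describes for the image pipeline is sketched below; the model id, function shape, and error text are placeholders:

```python
# image_pipeline_client.py (sketch): defer heavy imports until the pipeline is needed.
# The model id and error message are placeholders, not the project's actual values.
_pipeline = None

def load_pipeline(model_id: str = "runwayml/stable-diffusion-v1-5"):
    global _pipeline
    if _pipeline is not None:
        return _pipeline
    try:
        import torch
        from diffusers import StableDiffusionPipeline
    except ImportError as exc:
        raise ImportError(
            "Image generation needs torch and diffusers; install the optional "
            "dependencies from requirements_for_server.txt."
        ) from exc

    device = "cuda" if torch.cuda.is_available() else "cpu"
    _pipeline = StableDiffusionPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)
    return _pipeline
```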

Update dependency management and Dockerfile configuration: Modify .gitignore to include local_packages_for_server. Update Dockerfile to create cache directories and adjust pip installation to exclude local packages. Comment out unused dependencies in requirements files for better clarity and maintainability. Refactor main.py to streamline client loading during FastAPI initialization.
96d7be3

LeoNguyen committed on

Enhance client integration and error handling: Introduce open_ai_client and update chat_service to dynamically select the active client for message generation. Refactor llama_cpp_client and transformer_client to include loading status checks. Modify main.py to integrate image_pipeline_client and streamline resource management during FastAPI initialization.
c7dd77a

LeoNguyen101120 committed on
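
One plausible shape for the "dynamically select the active client" behaviour is a small fallback chain; the module paths, the is_loaded() check, and the create_chat_completion call below are assumed interfaces, not the actual code:

```python
# chat_service.py (sketch): pick whichever backend reports itself as loaded.
# Module paths and the client interface are assumptions.
from src.utils.clients import llama_cpp_client, open_ai_client, transformer_client

def get_active_client():
    for client in (llama_cpp_client, transformer_client, open_ai_client):
        if getattr(client, "is_loaded", lambda: False)():
            return client
    raise RuntimeError("No chat client is loaded; check startup logs.")

def generate(messages: list[dict]) -> dict:
    return get_active_client().create_chat_completion(messages)
```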

Update dependency management and error handling: Modify .gitignore to include llama.cpp and local_packages_for_win. Update requirements files to enable llama-cpp-python installation and specify additional index URLs for package retrieval. Enhance error handling in main.py during FastAPI startup to ensure exceptions are raised properly.
a4ffc6e

LeoNguyen101120 committed on

Refactor chat_service and llama_cpp_client: Replace transformer_client with llama_cpp_client for message generation and streaming. Enhance llama_cpp_client with improved error handling and tool call extraction. Streamline chat completion process and update function names for clarity.
bc721e3

LeoNguyen101120 committed on
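
A minimal sketch of the llama-cpp-python streaming this commit switches to; the model path and context size are placeholders, and the tool-call extraction it mentions is left out:

```python
# llama_cpp_client.py (sketch): stream a chat completion with llama-cpp-python.
# Model path and parameters are placeholders.
from llama_cpp import Llama

_llm = Llama(model_path="models/model.gguf", n_ctx=4096)

def stream_chat(messages: list[dict]):
    for chunk in _llm.create_chat_completion(messages=messages, stream=True):
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            yield delta["content"]
```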

Merge branch 'main' of https://github.com/nguyentronghuan101120/AI
9a7a061

LeoNguyen101120 committed on

Refactor dependencies and client integration: Update requirements.txt to adjust LLM model references and comment out unused dependencies. Modify main.py and chat_service.py to integrate llama_cpp_client for message generation, while ensuring transformer_client is commented out. Enhance llama_cpp_client with loading functionality and error handling for missing dependencies.
d25c49f

LeoNguyen101120 committed on

Update server requirements: Add torch version 2.7.0 and update bitsandbytes wheel path for Linux compatibility in requirements_for_server.txt. Introduce new bitsandbytes wheel file for server use.
9296b9a

LeoNguyen101120 committed on

Update Dockerfile and requirements: Introduce requirements_for_server.txt for streamlined dependency management, remove requirements.local.txt, and adjust Dockerfile to install local packages. Refactor chat_service.py to utilize transformer_client for message generation, and update .gitignore to include local_packages_for_win.
6b4fb1d

LeoNguyen101120 committed on

Update requirements and refactor client imports: Add uvicorn and update dependencies in requirements.txt. Refactor import statements in main.py, chat_service.py, image_service.py, and vector_store_service.py to use new client structure. Introduce new client modules for image and vector store handling, and enhance process_file_service.py with necessary imports for document loading and text splitting.
c2767f1

LeoNguyen101120 committed on
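
The document loading and text splitting added to process_file_service.py would typically look like the LangChain sketch below; the use of LangChain itself, the loader choice, and the chunk sizes are all assumptions:

```python
# process_file_service.py (sketch): load a PDF and split it into chunks for embedding.
# The loader and chunk sizes are illustrative, not the project's settings.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

def load_and_split(path: str):
    documents = PyPDFLoader(path).load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    return splitter.split_documents(documents)
```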

Refactor Dockerfile and update requirements: Uncomment llama-cpp-python installation lines for potential future use, streamline requirements installation, and modify CMD to use uvicorn for running the FastAPI app. Enhance chat service to utilize transformer_client for improved streaming and tool call handling, and introduce a new stream_helper for processing content.
739982d

LeoNguyen101120 committed on

Merge pull request #4 from nguyentronghuan101120/update
c6498b6

@huannt committed on

Implement lifespan context manager in FastAPI initialization: Reintroduce the lifespan context manager in main.py for resource management during app startup and shutdown, ensuring proper handling of exceptions. This change enhances the app's lifecycle management while maintaining commented-out resource loading and clearing functionality.
172f1fd

LeoNguyen101120 committed on
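
The lifespan context manager reintroduced here follows FastAPI's standard startup/shutdown API; the load/clear helpers below are placeholder names for the project's own resource handling:

```python
# main.py (sketch): manage resources on startup and shutdown with FastAPI's lifespan API.
# load_clients/clear_clients are placeholder names for the project's helpers.
from contextlib import asynccontextmanager

from fastapi import FastAPI

def load_clients() -> None:
    pass  # placeholder: warm up model clients here

def clear_clients() -> None:
    pass  # placeholder: release models / GPU memory here

@asynccontextmanager
async def lifespan(app: FastAPI):
    load_clients()   # exceptions raised here surface as startup failures
    yield
    clear_clients()

app = FastAPI(lifespan=lifespan)
```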

Update configuration and refactor chat handling: Change default port in launch.json, modify main.py to simplify FastAPI initialization by removing the lifespan context manager, update LLM_MODEL_NAME in config.py, and enhance system prompts for clearer tool call instructions. Refactor chat service and client to streamline tool call processing and improve response handling.
7404b4c

LeoNguyen101120 committed on

Refactor chat handling and model integration: Update .env.example to include new API keys, modify main.py to implement a lifespan context manager for resource management, and replace Message class with dictionary structures in chat_request.py and chat_service.py for improved flexibility. Remove unused message and response models to streamline codebase.
2692e0d

LeoNguyen101120 committed on

Enhance configuration and model handling: Update launch.json with IntelliSense comments, improve device selection logic in config.py for better compatibility with Apple Silicon and CUDA, and optimize model loading in transformer_client.py with enhanced settings for quantization and tokenizer performance. Update system prompts for clearer tool call instructions.
c62df12

LeoNguyen101120 committed on
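
The quantized model loading mentioned in this commit is sketched below with transformers and bitsandbytes; the model id echoes the one named further down this history, and the 4-bit settings are assumptions:

```python
# transformer_client.py (sketch): load a causal LM with 4-bit quantization.
# The model id and quantization settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "NousResearch/Hermes-2-Pro-Llama-3-8B"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",
)
```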

Merge pull request #3 from nguyentronghuan101120/transfomer
2e04cb1

@huannt committed on

Refactor configuration and model handling: Update .gitignore to include bitsandbytes and llama-cpp-python, modify launch.json for Windows compatibility, enhance requirements.txt, and rename constants in config.py for clarity. Update chat_request.py and transformer_client.py to use LLM_MODEL_NAME and improve model initialization with quantization support.
badee5c

LeoNguyen101120 committed on

Update model configurations and refactor chat service: Change default model to 'NousResearch/Hermes-2-Pro-Llama-3-8B' in config.py, enhance Message class to include role in output, and replace llama_cpp_client functions with transformer_client for improved chat generation and streaming capabilities.
54ef0b1

LeoNguyen101120 committed on
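
Streaming chat generation through transformers, as this commit describes, is commonly done with TextIteratorStreamer; a sketch that assumes the model and tokenizer from the quantization sketch above are already loaded:

```python
# transformer_client.py (sketch): stream generated tokens as they are produced.
# Assumes `model` and `tokenizer` are already loaded (see the quantization sketch above).
from threading import Thread

from transformers import TextIteratorStreamer

def stream_generate(prompt: str, max_new_tokens: int = 512):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    thread = Thread(
        target=model.generate,
        kwargs=dict(**inputs, streamer=streamer, max_new_tokens=max_new_tokens),
    )
    thread.start()
    for token_text in streamer:
        yield token_text
    thread.join()
```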

Refactor Dockerfile and .dockerignore: Update file copying strategy to include only necessary files and improve ignored patterns for better build efficiency.
e48d33f

LeoNguyen101120 committed on

Update Dockerfile: Add extra index URL for llama-cpp-python installation to improve package retrieval.
e086b97

LeoNguyen101120 committed on

Enhance llama_cpp_client.py: Add message prompt functions for improved chat handling and integrate tokenizer for better message formatting.
d9baaf0

LeoNguyen101120 committed on
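
"Integrate tokenizer for better message formatting" usually means applying the model's chat template; a hedged sketch, with the tokenizer repo id as an assumption:

```python
# Sketch of prompt building from chat messages via the model's chat template.
# The tokenizer repo id is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")

def build_prompt(messages: list[dict]) -> str:
    # messages use the usual {"role": ..., "content": ...} shape
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```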

Refactor llama_cpp_client.py: Remove GPU layer detection and thread count logic, simplify Llama initialization with default parameters, and enhance code clarity by cleaning up commented sections.
0f41fe4

LeoNguyen101120 committed on

Enhance llama_cpp_client.py: Implement GPU layer detection for macOS and NVIDIA systems, adjust thread count for non-GPU usage, and improve error handling in chat completion creation. Update verbose logging for better debugging.
c8ea414

LeoNguyen101120 committed on
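
The GPU-layer detection added here (and removed again in the commit above) could have looked roughly like this; the offload values and thread heuristic are guesses, not the original code:

```python
# Sketch of GPU-layer / thread-count detection for llama-cpp-python.
# Offload values and the thread heuristic are illustrative guesses.
import os
import platform

def detect_llama_cpp_params() -> dict:
    n_gpu_layers = 0
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        n_gpu_layers = -1  # Metal: offload all layers on Apple Silicon
    else:
        try:
            import torch
            if torch.cuda.is_available():
                n_gpu_layers = -1  # offload all layers to the NVIDIA GPU
        except ImportError:
            pass

    # Use most CPU cores only when no GPU offload is available.
    n_threads = None if n_gpu_layers else max(1, (os.cpu_count() or 2) - 1)
    return {"n_gpu_layers": n_gpu_layers, "n_threads": n_threads}
```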

Refactor Dockerfile and configuration paths: Update Dockerfile to create temporary data directories and adjust paths in config.py and llama_cpp_client.py for improved data management and integration. Comment out unused code in image_pipeline.py for clarity.
8885bd8

LeoNguyen101120 committed on

Update Dockerfile: Comment out unnecessary dependencies and streamline Python package installation by installing llama-cpp-python separately and adjusting requirements.txt for improved build efficiency.
9270eee

LeoNguyen101120 committed on

Refactor Dockerfile and configuration paths: Update Dockerfile to create necessary data directories and adjust paths in config.py and vector_store_service.py for improved data management.
d8c66d8

LeoNguyen101120 committed on

Refactor Dockerfile: Remove Hugging Face environment variables and cache directory setup for cleaner configuration.
c846cb8

LeoNguyen101120 committed on

Update .gitignore and requirements files; refactor config and service files for improved structure and maintainability.
2a2d65b

LeoNguyen101120 committed on

Enhance Dockerfile and vector_store_service.py: Add Hugging Face cache folder in Dockerfile; improve embeddings function initialization and formatting in vector_store_service.py.
b71088e

LeoNguyen101120 committed on
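
The embeddings initialization mentioned here is sketched with Chroma's bundled sentence-transformers wrapper; the embedding model name is an assumption:

```python
# vector_store_service.py (sketch): initialize an embedding function for Chroma.
# The embedding model name is an assumption.
from chromadb.utils import embedding_functions

embedding_fn = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"
)
```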

Remove unused imports from chat_service.py to clean up the code.
34bdcc9

LeoNguyen101120 committed on

Refactor configuration and messaging models; enhance chat service and routes for tool call handling; implement llama_cpp client for improved model integration; remove deprecated OpenAI client; update tool call response structure.
607f08c

LeoNguyen101120 committed on

Update requirements.txt to include llama-cpp-python dependency; change default port in launch.json from 8000 to 8080; add VSCode settings for Python type checking; modify welcome message in main.py; enhance configuration in config.py with new model and file name; implement Message and ChatResponse models for structured messaging; refactor chat_request and chat_service to utilize new message structure; streamline chat response handling; and update client.py for improved OpenAI API integration.
b3e45e6

LeoNguyen101120 committed on
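
The Message and ChatResponse models introduced here were presumably Pydantic models along these lines; the exact fields beyond role and content are assumptions:

```python
# Sketch of structured messaging models with Pydantic; extra fields are assumptions.
from typing import Optional

from pydantic import BaseModel

class Message(BaseModel):
    role: str  # "system" | "user" | "assistant"
    content: str

class ChatResponse(BaseModel):
    message: Message
    finish_reason: Optional[str] = None
```

A later commit in this history (2692e0d) replaces the Message class with plain dictionary structures for flexibility.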

Refactor Dockerfile to copy all application files into the container; add .gitignore for IDE files; update main.py and image_service.py to use OUTPUT_DIR from config; streamline file handling in process_file_service.py; remove unused OpenAI client initialization; enhance vector_store_service.py with configuration constants for improved maintainability.
16f0db6

LeoNguyen101120 committed on

Update vector_store_service.py to change PersistentClient data path from './data' to '/data' for improved file accessibility.
60924bb

LeoNguyen101120 committed on
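
The path change concerns Chroma's persistent client; a minimal sketch, with the collection name assumed:

```python
# vector_store_service.py (sketch): persist Chroma data under /data in the container.
# The collection name is an assumption.
import chromadb

client = chromadb.PersistentClient(path="/data")
collection = client.get_or_create_collection("documents")
```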

Add Dockerfile for containerization of the AI application; include environment setup, dependencies, and FastAPI configuration. Create comprehensive documentation in readme.github.md detailing features, project structure, API endpoints, and development instructions. Replace existing README.md with a simplified version for Hugging Face Spaces deployment.
6566dd8

LeoNguyen101120 committed on

Refactor README to use templated configuration values for title, emoji, colors, and SDK; add sdk_version and app_file entries for improved setup clarity.
211aa4f

LeoNguyen101120 committed on

Add initial documentation for AI Project in github.md, outlining features, project structure, API endpoints, tool-based AI capabilities, error handling, and development setup instructions.
d140d20

LeoNguyen101120 committed on

Update README to reflect changes in title casing, add short description, application port, and relevant tags for improved clarity and organization.
0aae23d

LeoNguyen101120 committed on