Fine-tune open-source large language models (LLMs) on E-commerce data, leveraging Amazon's sales data. Showcases a tailored solution for enhanced language understanding and generation, with a focus on custom E-commerce datasets.
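As a rough illustration of that kind of fine-tuning setup, the sketch below runs causal language-model fine-tuning with the Hugging Face Trainer. The gpt2 checkpoint, the ecommerce_sales.jsonl file, and the hyperparameters are placeholder assumptions, not the project's actual configuration.

```python
# Minimal sketch: fine-tuning a small open-source causal LM on e-commerce text.
# Model name, data path, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes a local JSONL file with a "text" field holding product/sales records.
dataset = load_dataset("json", data_files="ecommerce_sales.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llm-ecommerce", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```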
NeuroDoc is a powerful AI-based offline document summarization tool that leverages OCR and NLP to intelligently analyze PDFs and generate structured summaries. Built using Flask, this tool is designed to run completely offline and supports both text-based and scanned/image-based documents.
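One plausible way to support both text-based and scanned documents is to read embedded PDF text first and fall back to OCR for image-only pages. The sketch below assumes pdfplumber, pdf2image, and pytesseract as stand-ins; the repository's actual OCR/NLP stack may differ.

```python
# Minimal sketch of the text-vs-scanned PDF flow (library choices are assumptions).
import pdfplumber
import pytesseract
from pdf2image import convert_from_path

def extract_text(pdf_path: str) -> str:
    """Return embedded text if present, otherwise OCR each rendered page offline."""
    with pdfplumber.open(pdf_path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    if text.strip():
        return text  # text-based PDF
    # Scanned/image-based PDF: render pages to images and run OCR locally.
    images = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(img) for img in images)
```

The extracted text can then be fed to any local summarization model before being returned by the Flask app.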
This repository demonstrates how to use Hugging Face Transformers for text summarization. We focus on two state-of-the-art models: BART (facebook/bart-large-cnn) and T5 (t5-large). Both models are designed for sequence-to-sequence tasks, making them well suited to text summarization.
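A minimal usage sketch with both checkpoints via the summarization pipeline is shown below; the input text and generation lengths are placeholders.

```python
# Quick sketch: summarizing the same text with both checkpoints.
from transformers import pipeline

article = "Long article text goes here ..."

bart = pipeline("summarization", model="facebook/bart-large-cnn")
t5 = pipeline("summarization", model="t5-large")

print(bart(article, max_length=130, min_length=30, do_sample=False)[0]["summary_text"])
print(t5(article, max_length=130, min_length=30, do_sample=False)[0]["summary_text"])
```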
The project contains code and resources for a sophisticated AI-driven chatbot designed to provide accurate, context-aware responses. It uses RoBERTa and BART Transformers along with advanced NLP techniques, and can handle a wide range of domains such as healthcare and finance.
This repository explores enhancing dialogue summarization with commonsense knowledge through the SICK framework, evaluating models on dialogue datasets to assess the impact of commonsense knowledge on summarization quality.
A Software Tools & Methods project covering prompt engineering techniques for leveraging pre-trained models on Hugging Face, including designing, evaluating, and refining prompts for specific use cases to ensure optimal model performance.
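A small sketch of that prompt-comparison workflow might look like the following; the google/flan-t5-base checkpoint and both prompt variants are illustrative assumptions rather than the project's actual prompts or model.

```python
# Minimal prompt-engineering sketch: comparing two prompt phrasings on a
# pre-trained instruction-tuned model (checkpoint and prompts are assumptions).
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

review = "The headphones arrived late and the left ear cup rattles."
prompts = [
    f"Classify the sentiment of this review as positive or negative: {review}",
    f"Review: {review}\nIs the customer satisfied? Answer yes or no and explain briefly.",
]

for prompt in prompts:
    print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```

Comparing outputs across prompt variants like this is the core loop of designing, evaluating, and refining prompts.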
Understanding Language with Transformers. Implementing and improving BERT and BART models for multiple NLP tasks like sentiment analysis, paraphrase detection, semantic similarity, and generation.
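One common way to share an encoder across such tasks is a single BERT backbone with task-specific heads; the sketch below illustrates that general pattern and is an assumption, not this project's exact architecture.

```python
# Minimal sketch: shared BERT encoder with separate heads for sentiment
# classification and sentence-pair similarity (head sizes and pooling are assumptions).
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskBert(nn.Module):
    def __init__(self, name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.sentiment_head = nn.Linear(hidden, 2)    # negative / positive
        self.similarity_head = nn.Linear(hidden, 1)   # regression score

    def forward(self, **inputs):
        pooled = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] token
        return self.sentiment_head(pooled), self.similarity_head(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskBert()
batch = tokenizer(["Great phone, would buy again."], return_tensors="pt")
sentiment_logits, similarity_score = model(**batch)
```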