A chatbot app for chatting with any source of data (documents, URLs, etc.), leveraging LLMs, LangChain, and Gradio. The current version has the following features:
LLM: llama3
Data sources: "folder", "csv", "doc", "docx", "epub", "html", "md", "pdf", "ppt", "pptx", "txt", "ipynb", "py", and "url".
This app has been inspired by a few DeepLearningAI courses about LangChain.
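Supporting this many source types usually comes down to dispatching each input to a format-specific document loader. The sketch below is a hypothetical version of that dispatch in plain Python; the loader names mirror LangChain's community loaders, but the actual mapping used in the app is an assumption (subset shown).

```python
from pathlib import Path

# Hypothetical mapping from file extension to a LangChain loader name;
# the real app presumably uses loaders like these for each format.
LOADERS = {
    ".csv": "CSVLoader",
    ".pdf": "PyPDFLoader",
    ".md": "UnstructuredMarkdownLoader",
    ".txt": "TextLoader",
    ".ipynb": "NotebookLoader",
}

def pick_loader(source: str) -> str:
    """Choose a loader for a file path or URL (illustrative only)."""
    if source.startswith(("http://", "https://")):
        return "WebBaseLoader"  # URLs are fetched rather than read from disk
    suffix = Path(source).suffix.lower()
    if suffix not in LOADERS:
        raise ValueError(f"Unsupported source type: {suffix!r}")
    return LOADERS[suffix]
```

Once a loader is chosen, the loaded documents would typically be split, embedded, and handed to a retrieval chain before the Gradio chat loop starts.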
A multi-agent system app that uses open-source LLMs served by Ollama, together with CrewAI, LangChain, and Gradio, to simulate the brainstorming phase of a startup.
The multi-agent system simulates the environment of a startup composed of one idea generator and three senior staff members: a senior technical staff member, a senior product manager, and a senior business intelligence analyst. The idea generator comes up with three ideas relevant to a topic of your choice, and each of the three staff members then carries out their portion of the work.
This app has been inspired by a DeepLearningAI course about CrewAI.
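The flow described above can be sketched in plain Python, with each role as a stub function. This is only an illustration of the orchestration pattern; in the actual app the idea generator and staff roles are presumably CrewAI agents backed by Ollama-hosted models, and the role prompts are assumptions.

```python
# The three senior staff roles named in the description above.
ROLES = [
    "senior technical staff",
    "senior product manager",
    "senior business intelligence analyst",
]

def generate_ideas(topic: str, n: int = 3) -> list[str]:
    # Stand-in for the idea-generator agent (an LLM call in the real app).
    return [f"Idea {i + 1} on {topic}" for i in range(n)]

def run_brainstorm(topic: str) -> dict[str, list[str]]:
    """Each staff member assesses every generated idea from their own angle."""
    ideas = generate_ideas(topic)
    return {
        role: [f"{role} assessment of: {idea}" for idea in ideas]
        for role in ROLES
    }
```

In CrewAI terms, each entry in `ROLES` would become an `Agent` with its own backstory and task, and the crew would run the tasks sequentially or hierarchically.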
A Dungeon game powered by one of the LLMs provided by the Together API. The notebook contains a simple Dungeon game built using the Together API and Gradio, simulating a fantasy world comprising kingdoms, towns, characters, and inventories.
This app has been inspired by a DeepLearningAI course about the Dungeon game and the Together framework.
Code: https://github.com/aslansd/dungeon_game_using_together_api
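The game-world hierarchy mentioned above (kingdoms containing towns, towns containing characters, characters carrying inventories) can be modelled with a few dataclasses. The field and method names below are illustrative, not taken from the actual notebook.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    # Inventory maps item name -> count, as hinted by the description above.
    inventory: dict[str, int] = field(default_factory=dict)

    def pick_up(self, item: str, count: int = 1) -> None:
        self.inventory[item] = self.inventory.get(item, 0) + count

@dataclass
class Town:
    name: str
    characters: list[Character] = field(default_factory=list)

@dataclass
class Kingdom:
    name: str
    towns: list[Town] = field(default_factory=list)
```

In the LLM-driven game, a structure like this would hold the ground-truth state while the model narrates events and proposes state changes.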
Modified Ollama Deep Researcher
Modified Ollama Deep Researcher is an extension of LangChain’s Ollama Deep Researcher, a local web research assistant built on the multi-agent framework of LangGraph that can use any LLM hosted by Ollama. Give it a topic and it will generate a web search query, gather web search results (via any of seven search APIs, with DuckDuckGo as the default), summarise the results, reflect on the summary to identify knowledge gaps, generate a new search query to address those gaps, search again, and improve the summary, for a user-defined number of cycles. It then provides the user with a final markdown summary citing all sources used.
This app has been inspired by two LangChain Academy courses about LangGraph and LangSmith frameworks.
Code: https://github.com/aslansd/ollama-deep-researcher-modified
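The search–summarise–reflect cycle described above reduces to a simple loop. Here is a pure-Python sketch of that control flow with the LLM and search steps injected as callables; in the real app each step is a LangGraph node calling an Ollama-hosted model and a search API, so everything below is an illustrative simplification.

```python
def research(topic, cycles=3, *, search, summarise, reflect):
    """Iteratively refine a summary of `topic` over a fixed number of cycles.

    search(query)            -> list of result dicts with a "url" key
    summarise(prev, results) -> improved running summary (LLM call in the app)
    reflect(summary)         -> a new query targeting knowledge gaps
    """
    summary, sources = "", []
    query = topic  # the first query is derived from the topic itself
    for _ in range(cycles):
        results = search(query)
        sources.extend(r["url"] for r in results)
        summary = summarise(summary, results)
        query = reflect(summary)
    return summary, sources
```

The final markdown report would then be rendered from `summary` plus the accumulated `sources` list.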
A Multi-Modal RAG Application Built with Streamlit Using a Lightweight Multi-Modal Ollama Model
Retrieval-Augmented Generation (RAG) is an approach that combines information retrieval with large language models. It first retrieves relevant documents from a knowledge base, then uses the LLM to generate responses grounded in that external context. This improves accuracy, factuality, and domain adaptability compared to generation alone. Here, I developed a simple multi-modal RAG application (handling both text and image documents) built with Streamlit and powered by a lightweight multi-modal Ollama model. Place your text and image documents in the data/documents/ folder, then build a vector store from them, and finally run the app.
This app has been inspired by a few courses of DeepLearningAI.
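The retrieval half of RAG boils down to nearest-neighbour search over embedding vectors. The toy sketch below shows that step with cosine similarity over a tiny in-memory store; in the app, the vectors would come from a multi-modal embedding model (so text and image documents share one vector space) and the store would be a real vector database, both of which are assumptions here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, k=2):
    """Return ids of the k documents most similar to the query vector."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:k]]
```

The retrieved documents (text snippets or images) would then be packed into the prompt of the multi-modal Ollama model to ground its answer.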
A Streamlit App for open_deep_research Framework of LangChain/LangGraph
The Deep Research framework leverages large language models (LLMs) to automate and streamline research workflows. It integrates document ingestion, retrieval, and multi-modal analysis, enabling context-aware summarization, question answering, and insight extraction from diverse data sources. Here, I adapted LangChain’s Open Deep Research (LangGraph) into a Streamlit app for advanced research workflows.
This app has been inspired by a few courses of DeepLearningAI and LangChain Academy.
Code: https://github.com/aslansd/open_deep_research_web & https://rneaovknvzddyykhkeu2et.streamlit.app/
A React Native plus Expo Go App for open_deep_research Framework of LangChain/LangGraph
The Deep Research framework is the same one described above: it leverages LLMs to automate research workflows through document ingestion, retrieval, and multi-modal analysis. Here, I adapted LangChain’s Open Deep Research (LangGraph) into a React Native plus Expo Go app for advanced research workflows on mobile devices.
This app has been inspired by a few courses of DeepLearningAI and LangChain Academy.
Code: https://github.com/aslansd/open_deep_reasearch_mobile & https://expo.dev/accounts/asataryd/projects/open-deep-research-mobile
A Multi-agent Deep Research Assistant Built upon Deep Agent Framework of LangChain/LangGraph
The Multi-Agent Deep Research Assistant leverages LangGraph’s Deep Agents to orchestrate long-horizon, multi-step research workflows. Unlike shallow agents that simply call tools in a loop, Deep Agents integrate four critical components: (1) a planning tool, (2) sub-agent delegation, (3) context offloading to a temporary file system, and (4) carefully engineered prompts. This architecture enables robust handling of complex research pipelines involving dozens of tool calls, coordination between specialised agents, and the synthesis of final outputs (e.g., reports, charts, and explanations). By combining LangChain, LangGraph, OpenAI, and Tavily, the app provides an extensible framework for automated deep research that is transparent, traceable, and reproducible, as shown by its step-by-step workflow visualisation, execution traces, and generated research artefacts.
This app has been inspired by two courses of DeepLearningAI and LangChain Academy.
Code: https://github.com/aslansd/deepagent_researcher_analyse_visualise
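Two of the four components named above, planning and sub-agent delegation with context offloading, can be sketched in plain Python. Everything below is a stand-in: in the actual app, the plan comes from an LLM, the sub-agents are LangGraph agents with their own tools, and the "file system" is Deep Agents' temporary workspace.

```python
def deep_research(task, plan, subagents, fs=None):
    """Run a planned, delegated research task.

    plan(task)      -> list of (step, agent_name) pairs (LLM planner in the app)
    subagents       -> dict mapping agent name to a callable(step, fs) -> result
    fs              -> dict acting as the temporary file system for offloading
    """
    fs = {} if fs is None else fs
    for i, (step, agent_name) in enumerate(plan(task)):
        # Delegate the step to the named specialist sub-agent.
        result = subagents[agent_name](step, fs)
        # Offload the result to the workspace instead of growing the prompt.
        fs[f"notes/{i}.md"] = result
    return fs
```

A final synthesis agent would then read the accumulated `notes/*.md` artefacts and produce the report, chart, and explanation outputs.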
Future Projects:
1) Developing open-source tools based on large language models (short-term).
2) Building and training a biologically inspired large language model from scratch (long-term).