Launch a Private AI Document Chatbot in 10 minutes! | Open Source

Chapters

0:00 What is AnythingLLM?
0:57 Useanything.com hosted solution
2:07 Render.com introduction
2:51 Selecting the open source GitHub repo
3:10 Configuring the service
3:56 Consulting the docs!
4:22 Selecting your instance size $$
5:10 Setting up storage
7:15 TURN OFF AUTO DEPLOY!!
7:34 Vector database selections
7:40 LLM selections
8:10 Done!
8:55 Checking out our AnythingLLM instance
9:26 Total cost and summary

ENTIRE TRANSCRIPT 

Hey there, my name is Timothy Carambat, founder of Mintplex Labs, and today I wanted to make a pretty short video on how to host your own private instance of AnythingLLM. AnythingLLM is a private, open source chat-with-your-documents software that contains everything you're going to need to use it yourself or with a team. That being said, there are a lot of features in here and I'm not going to go over each one. It's pretty simple to run locally, but sometimes you're going to want to host it on the internet, where it's publicly accessible and where people can also collaborate: you can upload docs and everyone can contribute. It's all just a beautiful, beautiful thing that AnythingLLM can be when you add it into your business or your consultancy. Adding it into your whole stack pretty much gives you superpowers.

Now, for those of you who are familiar with this project, you've probably already run it locally, but now you're looking for a way to put it on the web. Well, I have two things to share. First is that we actually have a website for this now; I'll link it in the description. It is useanything.com. That is mostly because somebody bought the AnythingLLM domain, squatted on it, and is now trying to charge me like thirty thousand dollars for it. No. If they're watching this video: no. Anyway, aside from that, we actually have a way to run this for you. These are isolated instances, so if you upload anything, your storage is not shared with everybody else's. It's not like one giant S3 bucket or Google Drive; they are all isolated, private instances, you get your own subdomain, and you can actually get a free three-day trial.

Now, if you wanted to do this for yourself, in a way that is definitely more expensive but that you can fully manage (the hosted version is managed: we update it and always push the latest changes for you), we always make that available, because we are an open source project. You can install this yourself on a server if you own one, or the easiest option is actually to just use a service like Render. If you don't know what Render is, in summary it's basically AWS but with a way better interface, and in just a few clicks I'm going to show you how to launch an AnythingLLM instance that you can then access, if you want to do it this way, because why not. At the end of the day, our objective at Mintplex Labs is to make great open source tools that allow you or your business to use these new cutting-edge technologies; however you do that is totally up to you, and I want to offer all avenues available.

So to get started, you'll need a Render account, and you'll need to attach some billing to it, because nothing in life is free. Once you have your account set up on Render, all you need to do is go to New, then Web Service, and it's automatically going to ask you to connect a repository or link a public one. What we're going to do is link a public one, so we should be able to just paste that in there and click Continue. Now, this is going to be the name of our service. They don't tell you this, but this is actually what the subdomain will look like. You can also add a custom root domain or subdomain using this method as well, but we're just going to call this "sample-anything-llm-service".

The next thing is you're going to want to choose the render deployment branch. This is a special branch that we allocated just for Render, and it is always in sync with master, so whatever changes we push (for example, yesterday we made the full developer API live) will be live in this branch as well. It'll then be up to you to redeploy and pull in the changes, but since you own this, this is your system now.
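If you want to poke around that branch and its deployment docs locally before deploying, a quick clone works. This is a small sketch, assuming the branch is literally named render and that the folder layout matches what's shown in the video, so verify both on GitHub if anything fails:

```sh
# Grab only the Render-specific branch of the public repo.
# Branch name `render` is an assumption based on the video; verify it on GitHub.
git clone --branch render https://github.com/Mintplex-Labs/anything-llm.git
cd anything-llm
ls cloud-deployments   # the render folder and its README should live in here
```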
The root directory we're going to leave empty. Also, if you're a more visual learner and want to follow along, you can actually go to the render deployment branch; under cloud-deployments we have a render folder, and the README documentation in there will tell you exactly how to do this. So the root directory, as I said, is empty, and we can close this tab.

Now, for AnythingLLM, because there is frankly a lot going on with this service, you're going to need at least two gigabytes of RAM. This will keep your instance fast and performant without charging you a ton of money. That's $25 a month plus storage; more realistically you're probably going to be looking at around $30-something a month, but that's a low cost to pay for the amount of power you're going to unlock. And obviously we're going to choose Docker, because that's actually what we're running.

Advanced is where we need to get into the weeds a little bit to make sure we set this up right the first time. We're going to need two environment variables: one called PORT, which is going to be equal to the value of 3001 (this just makes the deployment process a lot faster for us on Render), and then a STORAGE_DIR variable, whose value will be /storage. If you're wondering what the hell that is and where it's coming from, I'll get to that in a little bit when we add a disk.

Render, much like other ephemeral services, does not offer persistent storage built in. What that means for you is that if you have your AnythingLLM instance running and then for whatever reason you restart it or pull in the latest change, all of your settings will be gone. That's obviously not what you want, because one of the benefits of AnythingLLM is the vector cache, so you don't have to keep paying to re-embed stuff in different workspaces. A disk solves that. We can just call the disk "storage", and the mount path will be that /storage path we set in the STORAGE_DIR environment variable. These two values, if you wanted to change them, must be the same. For size, I always recommend at least the smallest size you can get; one gigabyte is enough to get you started and see whether you like using AnythingLLM like this, or whether you'd prefer the local or our cloud-hosted version.

For the health check: you don't have to use it, but the proper value is /api/ping. This is basically something Render can ping every now and then to determine whether your instance is still online, and it'll send you an email if for whatever reason it's offline. For the build context directory, we're just going to put a period, so the build context is the root directory; you won't really need to worry about that. The Dockerfile path will be this special Dockerfile, which is just render.dockerfile: a slightly modified version of the original Dockerfile that allows this to run on Render without you having to debug anything. Same Docker command. For auto-deploy, I recommend turning it off and then manually updating as new updates come about. The reason I say this is that if you are in the middle of, say, embedding a huge document and your instance restarts, you typically don't want that to happen, because you'll wind up with a half-baked document or, even worse, a corrupted vector database.
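For reference, everything we just clicked through can also be captured as a Render Blueprint in a fork of the repo. This is my sketch of the same settings in render.yaml form, not something shown in the video; the field names follow Render's Blueprint format as I understand it, and the plan name for the 2 GB tier is an assumption, so verify against Render's docs before relying on it:

```sh
# Optional: the dashboard settings above, captured as a render.yaml Blueprint
# in your own fork. Field names are assumed from Render's Blueprint spec and
# `plan: standard` is an assumption for the 2 GB / $25-per-month size.
cat > render.yaml <<'EOF'
services:
  - type: web
    name: sample-anything-llm-service   # becomes the onrender.com subdomain
    env: docker                         # we are deploying the Docker image
    repo: https://github.com/Mintplex-Labs/anything-llm
    branch: render                      # Render-specific branch, kept in sync with master
    dockerfilePath: ./render.dockerfile # modified Dockerfile that runs cleanly on Render
    dockerContext: .                    # build context is the repo root
    plan: standard                      # assumed name of the 2 GB RAM instance size
    healthCheckPath: /api/ping          # Render pings this to confirm the instance is up
    autoDeploy: false                   # update manually to avoid mid-embed restarts
    envVars:
      - key: PORT
        value: 3001                     # speeds up the deployment process on Render
      - key: STORAGE_DIR
        value: /storage                 # must match the disk mountPath below
    disk:
      name: storage
      mountPath: /storage               # persistent disk so data survives restarts
      sizeGB: 1                         # smallest size; enough to get started
EOF
```

Either route ends at the same running service; the dashboard is simply the path the video takes.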
AnythingLLM comes with a built-in vector database, so you don't need to go and spin up another vector database, although you are totally welcome to do so and it'll work with it. As far as LLMs go, OpenAI and Azure OpenAI are the two supported ones. We'd like to support other LLMs, like Anthropic's Claude V2, but they have yet to email me back. I don't know if they just don't like me, but I've emailed them twice and they still won't give me access. So if you work for Anthropic, or you know someone that does, tell them that I am very interested in integrating their LLM into this.

And once we're done with that, that's actually the entire setup process. We're going to go to Create Web Service, and what's going to happen is that eventually we will have a running AnythingLLM instance where, no matter how many times we restart it, all of our data will be persisted, and it will be at this domain. Now, if you aren't familiar with Render but you're still excited to do this: if you go to Settings and then Custom Domains, that's how you add a custom domain. That being said, I'm not going to get into how to do that, because it's frankly out of the scope of this video. Everything else should be set up for you. This process can take a couple of minutes to boot up and get running, so I'm going to restart the video once the instance is live, just to show you that it's all there.

All right, and here we are, only a couple of minutes later, and what we can see is that the service is now live. It may take a couple of seconds after boot, but if we go to the domain, here we are: we are treated with a beautiful, fresh instance of AnythingLLM where we can create new workspaces. And just to be sure that this has the latest changes, there should be an API documentation endpoint at /api/docs, and there it is. So it looks like we are all good to go (there's a snippet after the transcript if you want to check these endpoints from a terminal). This would be your instance, and you can do what you want with it; just be careful to make sure you don't exceed the storage capacity of the disk if you set it to something low at first.

All in, this will cost you probably about $25 a month for sure, plus 25 cents per gigabyte of storage, regardless of whether it's being used or not. That's a pretty simple way to set up AnythingLLM. Hopefully you'll use this, or consider our hosted option, or even just run it locally and give us some great feedback, because that would be immensely helpful for us. Hopefully this video was helpful. Enjoy your instance of AnythingLLM, and thank you.
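For reference, the two endpoints mentioned at the end of the video can be checked from a terminal once the instance reports live. The hostname below is a hypothetical placeholder for whatever onrender.com subdomain your service was assigned:

```sh
# Substitute your own onrender.com subdomain; this one is hypothetical.
BASE="https://sample-anything-llm-service.onrender.com"

# The health check endpoint Render polls; expect HTTP 200 once boot finishes.
curl -i "$BASE/api/ping"

# The developer API documentation endpoint shipped with recent builds.
curl -s -o /dev/null -w "%{http_code}\n" "$BASE/api/docs"
```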