In today’s AI-driven world, applied economists can no longer pretend these technologies don’t exist and keep working as if it were 30 years ago, just as economists in the 1980s could not ignore the arrival of personal computers. We use tools like ChatGPT, Claude, and DeepSeek to check grammar, assist with coding, and learn new techniques. Yet many of us still lack real proficiency with these tools.
Open-Source vs. Closed-Source Models
When using a chatbot like ChatGPT, the first question to ask is: is the model open-source and running locally, or closed-source and running on a remote server? Closed-source models, or any model hosted on a remote server, may collect your input as training data, or worse, sell it for profit. The safest approach is to share sensitive information only with a local model running on your own hardware.
Instructions deleted due to copyright requirements and sensitive information
Hardware Requirements
I have found the Qwen 2.5 32B model sufficient for everyday tasks, while models with fewer than 10B parameters work well for simpler ones. Your local PC, whether running Windows or Linux, is only used for inference, since the model has already been trained by its developers.
The 32B model requires a total of 22.1GB of VRAM on my GPU setup (4×RTX 3060 12GB on PCIe 3.0 x1). The inference speed is approximately 8 tokens/second, which is good enough for basic chats and interactions. Models with fewer than 10B parameters can run on almost any GPU or even a CPU, provided it has sufficient memory (more than 8GB).
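As a rough sanity check on those numbers, the weights dominate the memory budget: a model quantized to about 4 to 5 bits per parameter needs roughly params × bits / 8 bytes, plus some headroom for the KV cache. The sketch below is a back-of-the-envelope estimate; the bits-per-weight and cache figures are my assumptions, not measurements from the setup above.

```python
# Back-of-the-envelope VRAM estimate for a quantized LLM.
# bits_per_weight and kv_cache_gb are assumptions, not measured values.

def estimate_vram_gb(params_billion: float, bits_per_weight: float = 4.5,
                     kv_cache_gb: float = 2.0) -> float:
    """Approximate VRAM: quantized weights plus a flat KV-cache allowance."""
    weight_gb = params_billion * bits_per_weight / 8  # GB for the weights alone
    return weight_gb + kv_cache_gb

print(f"32B model: ~{estimate_vram_gb(32):.1f} GB")  # ~20 GB, in line with the 22.1 GB above
print(f"7B model:  ~{estimate_vram_gb(7):.1f} GB")   # fits a single consumer GPU
```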
Loading an LLM Locally
<details removed>
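Since the original instructions were removed, here is a minimal sketch of one common way to run a quantized model locally with the llama-cpp-python library. The GGUF file name is hypothetical; you would first download a quantized Qwen 2.5 build from a model hub. This is an illustration, not the removed instructions.

```python
# Minimal local-inference sketch (pip install llama-cpp-python).
# The model_path below is a hypothetical local file, not a real download link.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # a quantized GGUF build
    n_gpu_layers=-1,  # offload every layer to the GPU(s)
    n_ctx=4096,       # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain fixed effects in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```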
Techniques for Using Chatbots
<details removed>
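The removed details are not reproduced here, but as a generic illustration of one widely used technique, it helps to state a role, the task, and explicit constraints before the material itself:

```python
# Generic prompt-structuring sketch; an illustration, not the removed content.
# Order the pieces: role, task, constraints, then the material to work on.
prompt = """You are an econometrics teaching assistant.

Task: explain the Stata command below to a first-year student.
Constraints: at most 100 words; do not modify the command.

Command:
reg wage educ exper, robust
"""
# Pass `prompt` to whichever backend you use (see the local-loading sketch
# above); the structure matters more than the specific model.
```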
Fine-Tuning the Model
<details removed>
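With the original details removed, the sketch below shows one standard approach: parameter-efficient fine-tuning with LoRA via Hugging Face's peft library. The model id and LoRA hyperparameters are assumptions for illustration, not the author's removed setup.

```python
# LoRA fine-tuning sketch (pip install transformers peft).
# Model id and hyperparameters are assumptions for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
# Training itself would follow, e.g. with transformers.Trainer on your data.
```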
API Usage
<details removed>
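Again the details were removed, so here is a hedged sketch of the usual pattern: most local servers (llama.cpp, Ollama, vLLM) expose an OpenAI-compatible endpoint, so the official openai client works against both hosted and local models. The base_url and model name below are assumptions for a hypothetical local server.

```python
# Chat-completion call against an OpenAI-compatible endpoint (pip install openai).
# base_url and model name are assumptions for a hypothetical local server;
# for a hosted model, use the provider's endpoint and your API key, and
# remember the privacy caveats discussed above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="qwen2.5-32b-instruct",  # whatever name the server registers
    messages=[{"role": "user", "content": "List the core IV assumptions."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```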
Python Basics
<details removed>
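The removed section's exact content is unknown; as a stand-in, here are a few staples an applied economist tends to need on day one (a guess at the scope, not the original material):

```python
# A handful of everyday Python staples (illustrative only).
import numpy as np
import pandas as pd

df = pd.DataFrame({"wage": [10.5, 12.0, 9.8], "educ": [12, 16, 10]})

df["log_wage"] = np.log(df["wage"])  # vectorized transformation
educated = df[df["educ"] >= 12]      # boolean filtering
print(educated.describe())           # quick summary statistics
```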
<Copyright issue, waiting for update>