Python LLM Application
👉 Why Ollama & DeepSeek are perfect for beginners?
👉 Master LLM app development in Python. Learn to integrate LLMs.
Table of Contents
- Table of Contents
- Getting Started
- Configure LLM API
- Imports & Resources
- Load API Keys
- Build the Ollama Client
- Test the Connection
- Key points
Getting Started
We’ve covered the concepts and visualizations; now it’s time to build! This hands-on guide walks you through developing your very first application that integrates Large Language Models (LLMs) using Ollama within a Python environment.
- Critically, we will leverage the familiar OpenAI client library to maintain a consistent and efficient interface.
Configure LLM API
Create or update the .env file in your project root directory and add the API URL and the API token required to connect to the LLM.
# Dummy API Key for local Ollama models
OPENAI_API_KEY="sk-proj-dummy-key"
LLM_BASE_URL="http://localhost:11434/v1"
Imports & Resources
We start by setting up the application environment and importing the necessary libraries.
# Import utilities for environment setup
import os
from dotenv import load_dotenv
# Import the OpenAI client library
from openai import OpenAI
Load API Keys
To maintain compatibility with the OpenAI library, an OPENAI_API_KEY environment variable must be defined, even when connecting to a local Ollama server.
Note: If you are connecting to local Ollama models, set a dummy value for OPENAI_API_KEY, such as sk-proj-dummy-key.
The code below loads the variables from the .env file in the project root directory; create the file if it is missing.
# Load environment variables from .env file
load_dotenv(override=True)
api_key = os.getenv('OPENAI_API_KEY')
# Basic key validation logic
if not api_key:
    print("No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!")
elif not api_key.startswith("sk-proj-"):
    print("An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook")
elif api_key.strip() != api_key:
    print("An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook")
else:
    print("API key found and looks good so far!")
Output Stream:
API key found and looks good so far!
Build the Ollama Client
To connect to a local LLM running via Ollama, we initialize the standard OpenAI client but override the default connection endpoint with the Ollama base URL.
# Define the local Ollama API endpoint
OLLAMA_BASE_URL = "http://localhost:11434/v1"
# If this doesn't work, try "http://localhost:11434" instead
# Initialize OpenAI client with the custom base URL for Ollama
client = OpenAI(base_url=OLLAMA_BASE_URL)
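Before sending a prompt, you can sanity-check the connection by listing the models the server exposes. A minimal sketch, assuming the Ollama server is running and at least one model has already been pulled (for example via ollama pull gemma3:1b):
# Sanity check: list the models available on the local Ollama server
# (Ollama exposes an OpenAI-compatible /v1/models endpoint)
try:
    models = client.models.list()
    print("Available models:", [m.id for m in models.data])
except Exception as e:
    print(f"Could not reach the Ollama server: {e}")
    print("Is Ollama running? Try starting it with 'ollama serve'.")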
Request Flow Diagram
This architecture illustrates how the application uses the standard OpenAI library to connect to the local Ollama server.
graph TD
A[Python Application Script] -- Uses OpenAI Client --> B(OpenAI Class)
B -- Connects to Custom Base URL --> C("Ollama Server @ http://localhost:11434/v1")
C -- Hosts Local Models --> D("Local LLM, e.g. gemma3:1b")
D -- Generates Response --> C
C -- Returns Response --> A
Test the Connection
The code below is a bare-minimum application that connects to a local Ollama LLM, sends a prompt, and receives a response.
# Define the model to use and the user's prompt
MODEL_NAME = "gemma3:1b"
payload = [{"role": "user", "content": "Tell me a fun fact about programmers."}]
# Send the request and read the response from the LLM model
response = client.chat.completions.create(model=MODEL_NAME, messages=payload)
# Print the generated content
print(response.choices[0].message.content)
Output Stream:
Okay, here’s a fun fact about programmers:
**Computers don’t truly *understand* code. They simply translate it into instructions that their hardware can execute.**
Think of it like a very complex robot. The robot follows the instructions we give it, but it doesn't *know* what it's doing. Programming is all about crafting those instructions in a way that a computer can follow effectively!
---
Want to know another fun fact?
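For longer generations, you may prefer to stream the reply token by token rather than waiting for the full response. A minimal sketch reusing MODEL_NAME and payload from above; the stream=True flag is part of the standard OpenAI chat completions API, which Ollama's endpoint also supports:
# Stream the reply token by token instead of waiting for the full response
stream = client.chat.completions.create(
    model=MODEL_NAME,
    messages=payload,
    stream=True,
)
for chunk in stream:
    # Each chunk carries a small delta of the generated text
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()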
Key points
Configuration Management: For more robust and scalable applications, manage configuration values such as OLLAMA_BASE_URL and MODEL_NAME as environment variables in the .env file, just like OPENAI_API_KEY.
- This makes switching between local (Ollama) and cloud (OpenAI) environments or models much cleaner, without modifying working, tested application code; see the sketch below.
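A minimal sketch of this approach, reusing the LLM_BASE_URL entry from the .env file shown earlier (the MODEL_NAME entry is an assumed addition to that file):
import os
from dotenv import load_dotenv
from openai import OpenAI

# Load connection settings from .env instead of hardcoding them
load_dotenv(override=True)
base_url = os.getenv("LLM_BASE_URL", "http://localhost:11434/v1")  # falls back to local Ollama
model_name = os.getenv("MODEL_NAME", "gemma3:1b")  # MODEL_NAME is an assumed new .env entry

client = OpenAI(base_url=base_url)
With this in place, switching between a local Ollama server and a cloud provider only requires editing the two values in .env.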