Modular API
This is example code, with fail-safes, for sending a POST request to the Modular AI chat completion endpoint.
If you want, you can omit the provider option.
import requests

# Replace with your actual API endpoint URL
API_ENDPOINT = 'https://258f3w-8080.csb.app/v1/chat/completions'
# Replace with your actual API key
API_KEY = 'mdl-XXXXXXXXXXXXX'

# Sample data to send in the POST request
data = {
    'model': 'gpt-3.5-turbo',  # Choose any model from the Modular AI model list
    'messages': [
        {"role": "user", "content": "Hello"}
    ],
    'provider': 'A valid Provider name'  # Replace with the provider you wish to use, or remove this line for the default provider
}

# Headers and fail-safe
headers = {
    'API-Key': API_KEY,
    'Content-Type': 'application/json'
}

try:
    # Send a POST request to the API endpoint
    response = requests.post(API_ENDPOINT, json=data, headers=headers)
    # Check whether the request was successful (status code 200)
    if response.status_code == 200:
        print("Request successful")
        print("Response:")
        print(response.json())
    else:
        print(f"Request failed with status code {response.status_code}")
        print(response.text)
except requests.RequestException as e:
    print(f"Request failed: {e}")
Generating an image with Python for Prodia:
import requests

# URL of your endpoint
url = 'https://258f3w-8080.csb.app/v1/imagine/prodia'  # Replace with your actual endpoint URL
# Your API key
api_key = 'mdl-XXXXXXXXXXXXXXXX'

# Sample data for the request
data = {
    'prompt': 'ferrari',  # Add your custom prompt
    'model': 'absolutereality_v181.safetensors [3d9d4d2b]',  # Choose your model from the model list
    'negative_prompt': 'ugly',  # Add a negative prompt
    'sampler': 'Euler a',
}

# Send a POST request with the API key in the headers
headers = {
    'API-Key': api_key
}
response = requests.post(url, json=data, headers=headers)

# Display the response content
print(response.text)
Generating an image with Craiyon v3:
import requests

# URL of the Flask API endpoint
api_url = "https://modularai.omiiidev.repl.co/v1/imagine/craiyon"  # Replace this with your actual API URL

# JSON payload with model, prompt, and negative_prompt
payload = {
    "model": "photo",
    "negative_prompt": "Ugly, deformed, broken",
    "prompt": "A cool sports car next to a house"
}

# Replace the placeholder with your actual API key
headers = {
    "Api-Key": "mdl-XXXXXXXXXXXXXXXXXXXXXX",
    "Content-Type": "application/json"
}

# Send a POST request to the API endpoint with headers
response = requests.post(api_url, json=payload, headers=headers)

# Process the response
if response.status_code == 200:
    response_data = response.json()
    print("Response:")
    print(response_data)
else:
    print(f"Failed to fetch images. Status code: {response.status_code}")
Using Imagine v2:
import requests

# Endpoint URL
url = "https://d39a247b-cc2f-462d-87e2-1b6edbe8bb2a-00-6bgmqmbcs7o0.janeway.replit.dev/generate_image"  # Replace with your Flask app's URL

# User-provided parameters
user_data = {
    "models": ["ICBINP - I Can't Believe It's Not Photography"],  # Choose a model
    "nsfw": False,
    "karras": True,
    "seed_variation": 1000,
    "prompt": "super cool app",
    "steps": 30,
    "sampler_name": "k_euler",
    "width": 512,
    "height": 512,
    "cfg_scale": 7
}

# Send a POST request to the Flask app
response = requests.post(url, json=user_data)

# Check the response
if response.status_code == 200:
    result = response.json()
    image_url = result.get("image_url")
    if image_url:
        print("Generated Image URL:", image_url)
    else:
        print("Error: No image URL found in the response.")
else:
    print(f"Error: Failed to generate image. Status Code: {response.status_code}")
    print("Response JSON:", response.json())
Text AI:
/v1/models (list models)
/v1/chat/completions (chat completion)
Image AI:
/v1/imagine/prodia (generate images)
/v1/imagine/prodia/models (image generation models for Prodia)
/v1/imagine/craiyon (generate images)
/v1/tts (generate text-to-speech)
Image AI (v2):
/v2/imagine/models (image generation models)
/v2/imagine (generate images)
/v2/predict (generate image)
/v2/predict/models (get image models)
/v2/dream (generate image)
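The model-listing endpoints above can be exercised with a short script. This is a sketch, assuming /v1/models accepts a plain GET request with the same API-Key header used elsewhere on this page and returns JSON:

```python
import requests

# List the available text models via /v1/models.
# Assumes a plain GET request with the API-Key header,
# mirroring the other examples on this page.
API_ENDPOINT = 'https://258f3w-8080.csb.app/v1/models'
API_KEY = 'mdl-XXXXXXXXXXXXX'  # Replace with your actual API key

try:
    response = requests.get(API_ENDPOINT, headers={'API-Key': API_KEY})
    if response.status_code == 200:
        print(response.json())
    else:
        print(f"Request failed with status code {response.status_code}")
except requests.RequestException as e:
    print(f"Request failed: {e}")
```

The same pattern works for /v1/imagine/prodia/models and /v2/imagine/models by swapping the path.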
(The API key is not added in this code)
import requests

baseurl = "https://258f3w-8080.csb.app/v2/predict/"

# User-defined payload with model selection
user_payload = {
    "width": 1024,
    "height": 1024,
    "prompt": "car",
    "version": "642dcbffc8e8a3522242a4d4b8f3159470bf86a3eb7ccf852337cdc6440698c2"  # Replace with the desired model version
}

# Send a POST request to the Flask server
try:
    response = requests.post(baseurl, json=user_payload)
    response_data = response.json()
    if "error" in response_data:
        print(f"Error: {response_data['error']}")
    else:
        output_value = response_data.get("output_value")
        print(f"Output Value: {output_value}")
except requests.exceptions.RequestException as e:
    print(f"Error in request: {e}")
(The API key is not added in this code)
import requests

api_url = "https://258f3w-8080.csb.app/v1/imagine/craiyon"

# Image customizations
payload = {
    "model": "photo",
    "negative_prompt": "Ugly, deformed, broken",
    "prompt": "Cartoon of a man with a hat"
}

# Replace 'YOUR_API_KEY' with your actual API key
headers = {
    "API-Key": "YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Send a POST request to the API endpoint with headers
response = requests.post(api_url, json=payload, headers=headers)

# Process the response
if response.status_code == 200:
    response_data = response.json()
    print("Response:")
    print(response_data)
else:
    print(f"Failed to fetch images. Status code: {response.status_code}")
Use the TTS:
import requests

# Replace 'your_api_key' with the actual API key
api_key = 'your_api_key'

# Base URL of the Flask app (no trailing slash, since the path below adds one)
base_url = 'https://258f3w-8080.csb.app'

# Example: TTS data
tts_data = {
    'text': 'your prompt here'
}

response = requests.post(f'{base_url}/v1/tts', json=tts_data, headers={'API-Key': api_key})

# Check whether the request was successful (status code 200)
if response.status_code == 200:
    try:
        # Try to parse the response as JSON
        result = response.json()
        print(result)
    except requests.exceptions.JSONDecodeError:
        # If parsing fails, print the raw response text
        print(response.text)
else:
    # Print an error message if the request was not successful
    print(f'Error: {response.status_code}, {response.text}')
Generate an image using DALL·E 3 or Studio Ghibli:
import requests

url = "https://258f3w-8080.csb.app/v2/dalle/"

payload = {
    "model": "Your Model Here",
    "prompt": "Your Prompt Here"
}
headers = {
    "Content-Type": "application/json",
    "API-Key": "mdl-XXXXXXXXXXXXXXXXXX"  # Replace with your API key
}

try:
    response = requests.post(url, json=payload, headers=headers)
    print(response.text)
except Exception as e:
    print(f"Error: {e}")
Generate an image (API key not added yet):
import requests

base_url = "https://258f3w-8080.csb.app/v2/imagine/"

# User-provided parameters
user_data = {
    "models": ["ICBINP - I Can't Believe It's Not Photography"],
    "nsfw": False,
    "karras": True,
    "seed_variation": 1000,
    "prompt": "cyborg",
    "steps": 30,
    "sampler_name": "k_euler",
    "width": 640,
    "height": 512,
    "cfg_scale": 7
}

# Send a POST request
response = requests.post(base_url, json=user_data)

# Check the response
if response.status_code == 200:
    result = response.json()
    image_url = result.get("image_url")
    if image_url:
        print("Generated Image URL:", image_url)
    else:
        print("Error: No image URL found in the response.")
else:
    print(f"Error: Failed to generate image. Status Code: {response.status_code}")
    print("Response JSON:", response.json())
Modular G4F API (still works)
You can test the available models, which can be found here, in Python.
This code is just an example (there are easier ways to do this).
If you know how to code, all you have to do is this:
1. Import requests.
2. Send a POST request to the endpoint URL: /v1/chat/completions.
3. Add the API key (optional).
4. Add the variables for the model, user prompt, temperature, etc.
5. Then print out the response!
/v1/chat/completions
API endpoint to message the models
/v1/models
API endpoint to view all models (not all are available)
Endpoint URL: https://modualrapi.onrender.com/
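The steps above can be sketched as follows. The payload fields mirror the chat-completion example earlier on this page; the API key header is omitted here since step 3 marks it optional:

```python
import requests

# Steps 1-5: send a chat completion request to the G4F endpoint
# and print the response. Adjust model/temperature to taste.
url = 'https://modualrapi.onrender.com/v1/chat/completions'
payload = {
    'model': 'gpt-3.5-turbo',
    'messages': [{'role': 'user', 'content': 'Hello'}],
    'temperature': 0.7
}
headers = {'Content-Type': 'application/json'}  # API key is optional here

try:
    response = requests.post(url, json=payload, headers=headers)
    print(response.json())
except requests.RequestException as e:
    print(f"Request failed: {e}")
```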
Command to use Modular API with terminal copilot:
interpreter --api_base https://258f3w-8080.csb.app/ --api_key "your-api-key" --model openai/gpt-3.5-turbo --context_window 300000