Companies have created APIs to access their large language models. Here we illustrate how to use the Gemini API from Google and the GPT API from OpenAI. We'll show how to do this on Colab, where we use its facility for storing secrets. If you're not using Colab, you need to find an appropriate way to store your API keys securely; my recommendation is to use environment variables. Never hardcode your API keys in your code, share them publicly, or commit them to version control. People create bots that scan public repos for API keys to exploit them.
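As a minimal sketch of the environment-variable approach (the variable name `GEMINI_API_KEY` here is just an example; use whatever name you set in your shell):

```python
import os

def load_api_key(var_name: str = "GEMINI_API_KEY") -> str:
    """Fetch an API key from the environment; fail loudly if it's missing."""
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(f"{var_name} is not set; export it in your shell first.")
    return key
```

You would set the variable once in your shell (e.g. `export GEMINI_API_KEY=...`) and call `load_api_key()` in your script, so the key never appears in the code itself.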
Using Gemini API calls
- First go to aistudio.google.com.
- Create an API key and save it as a secret in Colab.

The following may not be necessary for the free tier:

- Sign up for a Google Cloud account.
- Create a project.
- Enable billing! (Note this costs money, so set up an account budget and alerts.)
- Pricing can be found here.

Again, make sure to store your API key securely, and make sure that you aren't creating a large bill for yourself! (Google has some free-tier usage, but be careful.)
First, let’s install the Gemini client library. Run this cell in your notebook (only needs to be run once):
# @title Install the generative ai interface
!pip install -q -U google-generativeai

The following fetches the API key and then enters it into the Gemini client configuration.
# @title Set up your key with the colab notebook session
import google.generativeai as genai
from google.colab import userdata
# Retrieve your API key from Colab's Secrets
GOOGLE_API_KEY = userdata.get('GEMINI_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)

The following lists the available models. Refer back to the model list from Gemini to see which models are available to you and their pricing.
# @title list available models
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)

models/gemini-1.0-pro-vision-latest
models/gemini-pro-vision
models/gemini-1.5-pro-latest
models/gemini-1.5-pro-002
models/gemini-1.5-pro
models/gemini-1.5-flash-latest
models/gemini-1.5-flash
models/gemini-1.5-flash-002
models/gemini-1.5-flash-8b
models/gemini-1.5-flash-8b-001
models/gemini-1.5-flash-8b-latest
models/gemini-2.5-pro-preview-03-25
models/gemini-2.5-flash-preview-05-20
models/gemini-2.5-flash
models/gemini-2.5-flash-lite-preview-06-17
models/gemini-2.5-pro-preview-05-06
models/gemini-2.5-pro-preview-06-05
models/gemini-2.5-pro
models/gemini-2.0-flash-exp
models/gemini-2.0-flash
models/gemini-2.0-flash-001
models/gemini-2.0-flash-exp-image-generation
models/gemini-2.0-flash-lite-001
models/gemini-2.0-flash-lite
models/gemini-2.0-flash-preview-image-generation
models/gemini-2.0-flash-lite-preview-02-05
models/gemini-2.0-flash-lite-preview
models/gemini-2.0-pro-exp
models/gemini-2.0-pro-exp-02-05
models/gemini-exp-1206
models/gemini-2.0-flash-thinking-exp-01-21
models/gemini-2.0-flash-thinking-exp
models/gemini-2.0-flash-thinking-exp-1219
models/gemini-2.5-flash-preview-tts
models/gemini-2.5-pro-preview-tts
models/learnlm-2.0-flash-experimental
models/gemma-3-1b-it
models/gemma-3-4b-it
models/gemma-3-12b-it
models/gemma-3-27b-it
models/gemma-3n-e4b-it
models/gemma-3n-e2b-it
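The list is long; if you only care about one model family, you can filter the names. A small sketch over a toy subset of the names above (the real loop would iterate `genai.list_models()` as shown earlier):

```python
# Toy subset of model names, for illustration only.
model_names = [
    "models/gemini-1.5-pro",
    "models/gemini-1.5-flash",
    "models/gemini-2.0-flash",
]

# Keep only the Flash variants.
flash_models = [name for name in model_names if "flash" in name]
print(flash_models)  # ['models/gemini-1.5-flash', 'models/gemini-2.0-flash']
```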
Now let’s try generating some response text from a prompt. Notice we give a specific model from our model list and then generate the response with model.generate_content.
# @title try running a command
model = genai.GenerativeModel('gemini-1.5-flash') # Or gemini-1.5-pro, etc.
response = model.generate_content("List out the downsides of having a woodchuck as a pet")
print(response.text)

Woodchucks, while undeniably cute, are not ideal pets for most people. Here are some significant downsides:
* **Wild Animal Behavior:** Woodchucks are wild animals. They retain strong instincts to burrow, forage, and defend their territory. This means they're unlikely to be cuddly or easily trained like a domesticated animal. Expect digging, chewing, and potentially aggressive behavior if they feel threatened.
* **Extensive Housing Requirements:** A suitable enclosure for a woodchuck is far larger than a typical pet cage. You'd need a sizable, secure outdoor enclosure with plenty of space for digging, climbing, and hiding. This requires significant space, time, and money to build and maintain. Escape-proofing is paramount.
* **Specialized Diet:** Feeding a woodchuck correctly requires research and commitment. Their diet consists of grasses, plants, fruits, and vegetables – not standard pet store food. Providing a balanced diet and ensuring access to fresh water is crucial.
* **Veterinary Care:** Finding a veterinarian experienced with woodchucks can be challenging. Treatment costs can be high, and illness in a wild animal can be difficult to diagnose and treat.
* **Legal Restrictions:** Keeping wild animals as pets is often restricted by local laws and regulations. You may need permits or licenses, and it's important to comply with all applicable regulations.
* **Disease Risk:** Woodchucks can carry diseases, such as Lyme disease, that can be transmitted to humans. Proper hygiene and sanitation are essential, but risk remains.
* **Smell:** Woodchucks, especially males, can have a strong musky odor.
* **Destructive Behavior:** Their natural instinct to dig and chew can cause significant damage to your property if not contained properly.
* **Limited Interaction:** Unlike dogs or cats, you'll likely have limited opportunities for genuine interaction and bonding with a woodchuck.
* **Ethical Considerations:** Some people argue against keeping wild animals as pets, believing it's unnatural and potentially harmful to their well-being. Removing them from their natural habitat can negatively impact their overall health and longevity.
In short, while the novelty might be appealing, the responsibilities and challenges of keeping a woodchuck as a pet far outweigh the potential rewards for most people. It's generally recommended to appreciate these animals in their natural habitat rather than attempting to domesticate them.
Now let’s see how to generate embeddings using Gemini.
# Choose an embedding model
embedding_model = 'text-embedding-004' # A good general-purpose embedding model
# Text to embed
text_to_embed = "The quick brown fox jumps over the lazy dog."
# Get the embedding
response = genai.embed_content(
    model=embedding_model,
    content=text_to_embed,
    # task_type=types.TaskType.SEMANTIC_SIMILARITY  # Example task type
)

And here’s the output. Notice it’s 768-dimensional.
print(len(response['embedding']))
print(response['embedding'][0 : 9])
768
[-0.06261901, 0.008358474, 0.020931892, 0.023453966, -0.03660129, 0.033054803, 0.016852979, 0.036087364, 0.047807004]
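Embeddings are mainly useful for comparing texts: similar texts get vectors that point in similar directions. A minimal sketch of cosine similarity with numpy (the toy vectors below are made up for illustration; in practice you'd use the 768-dimensional vectors returned by genai.embed_content):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for real embeddings.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
print(cosine_similarity(v1, v2))  # identical vectors give similarity ~1.0
```

To compare two sentences, you would embed each one and pass the two `response['embedding']` lists to this function.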
OpenAI
Here we show how to use the OpenAI API to access GPT models. First, you need to sign up for an account at OpenAI and create an API key. Store this key securely as mentioned above. I’m specifically using 0.28.1 here. Instructions for both OpenAI and Gemini may have changed since this was written so refer to their documentation for the latest instructions.
!pip install openai==0.28.1

Here we get the key from our Colab secrets.
import openai
gpt_key = userdata.get('a2cps_gpt_api_key')
Now making the call to OpenAI is easy.
prompt = "list out the benefits of having a woodchuck as a pet"
openai.api_key = gpt_key
# Call the ChatCompletion endpoint
# Call the ChatCompletion endpoint
response = openai.ChatCompletion.create(
    model="gpt-4o",  # other examples: "gpt-4", "gpt-3.5-turbo"
    messages=[
        # you can add system-level instructions here; I'm omitting for simplicity
        # {"role": "system", "content": text_content},
        {"role": "user", "content": prompt}
    ],
    ## Temperature controls randomness of output
    temperature=0.7,
    ## The maximum number of tokens to generate in the completion
    ## Remember billing is based on input + output token totals
    max_tokens=4096
)
# Print the response
print(response['choices'][0]['message']['content'])
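The ChatCompletion response is a dict-like object that also reports token usage, which is what billing is based on. A small sketch of pulling out the reply text and token counts, demonstrated on a mock dict shaped like the API's response (the real call returns the same structure):

```python
def summarize_response(response):
    """Extract reply text and token usage from a ChatCompletion-style response."""
    text = response["choices"][0]["message"]["content"]
    usage = response.get("usage", {})
    return text, usage.get("prompt_tokens"), usage.get("completion_tokens")

# Mock response with the same structure the API returns, for illustration.
mock = {
    "choices": [{"message": {"role": "assistant", "content": "Woodchucks dig a lot."}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 6, "total_tokens": 18},
}
text, n_in, n_out = summarize_response(mock)
print(text)         # Woodchucks dig a lot.
print(n_in, n_out)  # 12 6
```

Checking `usage` after each call is a cheap way to keep an eye on your bill.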