
generative-ai-python's People

Contributors

a-stupid-sun, aertoria, aidoskanapyanov, andy963, eavanvalkenburg, ftnext, hamza-nabil, hu-po, jybaek, kaycebasques, keertk, lll-lll-lll-lll, luo-anthony, markdaoust, markmcd, mayureshagashe2105, mpursley, nbro, rajveer43, random-forests, sbagri, shilpakancharla, thesanju, tymichaelchen, waltforme, xnny, yebowenhu, yeuoly, yihong0618, ymodak


generative-ai-python's Issues

google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable.

Description of the bug:

I frequently hit the error google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable., especially when I call the API multiple times in a loop. Is this a bug, or is there a way to work around it? Thanks!
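Transient 503s in a tight loop are usually handled by retrying with exponential backoff rather than calling again immediately. A minimal sketch of the pattern; TransientError and flaky_call are stand-ins for google.api_core.exceptions.ServiceUnavailable and the real SDK call:

```python
import random
import time

# TransientError stands in for google.api_core.exceptions.ServiceUnavailable,
# and flaky_call for the SDK call made inside the loop.
class TransientError(Exception):
    pass

def call_with_backoff(fn, max_attempts=5, base_delay=0.01):
    """Call fn(), retrying on TransientError with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last 503
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("503 The service is currently unavailable.")
    return "ok"

print(call_with_backoff(flaky_call))  # -> ok
```

The same shape works with the real exception type swapped in for TransientError.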

Actual vs expected behavior:

Text generation and chat should complete normally, but they frequently fail with google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable.

Any other information you'd like to share?

No response

google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling target function, last exception: 503 Getting metadata from plugin failed with error: ('invalid_grant: Bad Request', {'error': 'invalid_grant', 'error_description': 'Bad Request'})

Description of the bug:

Package version: 0.3.1

  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 65, in error_remapped_callable
    return callable_(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/grpc/_channel.py", line 1161, in __call__
    return _end_unary_response_blocking(state, call, False, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/grpc/_channel.py", line 1004, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNAVAILABLE
        details = "Getting metadata from plugin failed with error: ('invalid_grant: Bad Request', {'error': 'invalid_grant', 'error_description': 'Bad Request'})"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2023-12-13T23:10:10.374012-08:00", grpc_status:14, grpc_message:"Getting metadata from plugin failed with error: (\'invalid_grant: Bad Request\', {\'error\': \'invalid_grant\', \'error_description\': \'Bad Request\'})"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/retry.py", line 191, in retry_target
    return target()
           ^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/timeout.py", line 120, in func_with_timeout
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.ServiceUnavailable: 503 Getting metadata from plugin failed with error: ('invalid_grant: Bad Request', {'error': 'invalid_grant', 'error_description': 'Bad Request'})

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/defalt/Desktop/Athena/research/swarms/gemini.py", line 23, in <module>
    response = model.generate_content("The opposite of hot is")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/generativeai/generative_models.py", line 243, in generate_content
    response = self._client.generate_content(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py", line 566, in generate_content
    response = rpc(
               ^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
    return wrapped_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/retry.py", line 349, in retry_wrapped_func
    return retry_target(
           ^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/retry.py", line 207, in retry_target
    raise exceptions.RetryError(
google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling target function, last exception: 503 Getting metadata from plugin failed with error: ('invalid_grant: Bad Request', {'error': 'invalid_grant', 'error_description': 'Bad Request'})

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

count_tokens method not working

Description of the bug:

The count_tokens method on the genai.GenerativeModel class is not working; it raises:
TypeError: count_tokens() takes from 1 to 2 positional arguments but 3 were given

Actual vs expected behavior:

genai.configure(api_key=api_key.GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')
#test apikey
for m in genai.list_models():
  if 'generateContent' in m.supported_generation_methods:
    print(m.name)
response = model.generate_content("Is sky blue?")
print(response.text)
print(model.count_tokens("hello"))

Output:

models/gemini-pro
models/gemini-pro-vision
In general, the sky appears blue during the day due to a phenomenon known as Rayleigh scattering. Here's why:

* **Sunlight:** Sunlight consists of a range of colors, including blue, green, yellow, orange, red, and violet.

* **Wavelength:** Each color of light has a specific wavelength. Blue light has a shorter wavelength compared to other colors in the visible spectrum.

* **Rayleigh Scattering:** When sunlight enters the Earth's atmosphere, it interacts with molecules of nitrogen and oxygen. These molecules are much smaller than the wavelength of visible light. When sunlight encounters these molecules, it undergoes a process called Rayleigh scattering.

* **Scattering of Blue Light:** Rayleigh scattering causes the shorter wavelength blue light to scatter more than other colors of light. This means that blue light is scattered in all directions, including towards our eyes. As a result, we perceive the sky as blue during the daytime.

* **Other Factors:** The exact shade of blue we see can vary depending on factors such as the time of day, weather conditions, and the presence of particles in the atmosphere like dust, smoke, or clouds. These factors can affect the amount of scattering and the intensity of the blue color.

It's important to note that the sky is not inherently blue. The blue color we see is a result of the interaction between sunlight and the Earth's atmosphere. This phenomenon is what gives us our beautiful blue skies during the daytime.

{
	"name": "TypeError",
	"message": "count_tokens() takes from 1 to 2 positional arguments but 3 were given",
	"stack": "---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[21], line 9
      7 response = model.generate_content(\"Is sky blue?\")
      8 print(response.text)
----> 9 print(model.count_tokens(\"hello\"))

File ~/Library/Python/3.9/lib/python/site-packages/google/generativeai/generative_models.py:278, in GenerativeModel.count_tokens(self, contents)
    274 def count_tokens(
    275     self, contents: content_types.ContentsType
    276 ) -> glm.CountTokensResponse:
    277     contents = content_types.to_contents(contents)
--> 278     return self._client.count_tokens(self.model_name, contents)

TypeError: count_tokens() takes from 1 to 2 positional arguments but 3 were given"
}
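For context on the error shape: the wrapped client method accepts a single request object (plus self), so forwarding the model name and the contents as two positional arguments trips Python's arity check. FakeClient below is a hypothetical stand-in that reproduces the mechanism, not the SDK's real client:

```python
# A method declared with one (optional) parameter besides self raises the
# same "takes from 1 to 2 positional arguments but 3 were given" TypeError
# when called with two positionals.
class FakeClient:
    def count_tokens(self, request=None):
        return {"total_tokens": len(str(request))}

client = FakeClient()
try:
    client.count_tokens("models/gemini-pro", ["hello"])  # two positionals
except TypeError as exc:
    print(exc)  # "...takes from 1 to 2 positional arguments but 3 were given"

# Packing everything into one request mapping satisfies the signature:
print(client.count_tokens({"model": "models/gemini-pro", "contents": ["hello"]}))
```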

Any other information you'd like to share?

No response

How can I use Chinese as input

import pprint
import google.generativeai as palm

palm.configure(api_key='MY_API_KEY')

# Chinese text as input
inpu = "你好"
response = palm.chat(messages=inpu)

print(response)

This is my code. How can I use Chinese as input? When I do, the response is None and the filters return a blocked reason, as shown in the image.

Set a role for GenerativeModel (gemini-pro) with Python SDK?

Description of the feature request:

Is there really no way to set a role for the gemini-pro model when using the Python SDK? I tried editing the role in the source code, but then my requests were always blocked.
I also checked the Gemini API and Vertex AI docs but found nothing, and experimenting in the playground did not get me there either.
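Pending first-class role support in the SDK, a common workaround is to fold the role text into the first user turn. A sketch; the with_role helper is hypothetical, and the commented lines show where the real generate_content call would go:

```python
# Fold a role/system instruction into the user prompt as plain text.
def with_role(role: str, user_message: str) -> str:
    return f"{role.strip()}\n\nUser: {user_message.strip()}"

prompt = with_role(
    "You are a terse assistant that answers in one sentence.",
    "What is the meaning of life?",
)
# model = genai.GenerativeModel("gemini-pro")
# response = model.generate_content(prompt)
print(prompt.startswith("You are a terse assistant"))  # -> True
```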

What problem are you trying to solve with this feature?

To set a role for gemini using the vertex lib (GenerativeModel("gemini-pro"))

Any other information you'd like to share?

No response

Async embed_content

Description of the feature request:

The Generative Language API supports embed_content through the async client, so we should support it through the SDK too.
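Until the SDK wires embed_content through the async client, one stopgap is to push the blocking call onto a worker thread. A sketch, with fake_embed_content standing in for the real synchronous embed call:

```python
import asyncio

# fake_embed_content stands in for the blocking SDK embed call;
# asyncio.to_thread keeps it off the event loop.
def fake_embed_content(model: str, content: str) -> dict:
    return {"embedding": [float(ord(c)) for c in content[:3]]}

async def embed_content_async(model: str, content: str) -> dict:
    return await asyncio.to_thread(fake_embed_content, model, content)

async def main() -> None:
    vectors = await asyncio.gather(
        embed_content_async("models/embedding-001", "hello"),
        embed_content_async("models/embedding-001", "world"),
    )
    print(len(vectors))  # -> 2

asyncio.run(main())
```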

What problem are you trying to solve with this feature?

Embedding content, asynchronously.

Any other information you'd like to share?

No response

How to interpret "safety_feedback"

Hi, I am getting this result when trying to use genai.generate_text():

Completion(
    candidates=[], 
    result=None, 
    filters=[{'reason': <BlockedReason.SAFETY: 1>}], 
    safety_feedback=[{
        'rating': {'category': <HarmCategory.HARM_CATEGORY_TOXICITY: 2>, 'probability': <HarmProbability.LOW: 2>},
        'setting': {'category': <HarmCategory.HARM_CATEGORY_TOXICITY: 2>, 'threshold': <HarmBlockThreshold.BLOCK_LOW_AND_ABOVE: 1>}}])

Nothing in my prompt is in the least bit toxic; it's just frontend code. Can you please tell me what input arguments to give generate_text so it stops blocking? I tried adjusting safety_settings, but it's not well documented and I didn't succeed.
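For reference, safety_settings is a list of category/threshold mappings, and raising the threshold for the blocking category is the usual lever. A sketch; the literal names below are assumptions drawn from the HarmCategory and HarmBlockThreshold enums, so verify them against your installed SDK version:

```python
# The feedback above shows HARM_CATEGORY_TOXICITY blocked at
# BLOCK_LOW_AND_ABOVE; raising that category's threshold is the usual fix.
safety_settings = [
    {
        "category": "HARM_CATEGORY_TOXICITY",
        "threshold": "BLOCK_ONLY_HIGH",  # was effectively BLOCK_LOW_AND_ABOVE
    },
]
# completion = palm.generate_text(prompt=prompt, safety_settings=safety_settings)
print(safety_settings[0]["threshold"])  # -> BLOCK_ONLY_HIGH
```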

AttributeError: module 'google.generativeai' has no attribute 'generate_content'

Description of the bug:

I ran into this error:
AttributeError: module 'google.generativeai' has no attribute 'generate_content'

import os
import google.generativeai as genai
genai.configure(api_key=os.getenv('GEMINI_API_KEY'))

prompt = "Write a story about a magic backpack."

response = genai.generate_content(
    model="gemini-pro",
    prompt=prompt
)

Actual vs expected behavior:

No response

Any other information you'd like to share?

Python 3.9.0
google-generativeai==0.3.2

How does google.generativeai.count_message_tokens work?

Hi there,
I tried to count the token number of a text using google.generativeai.count_message_tokens according to the following code.
I got 12 for the input "a".
Does google.generativeai.count_message_tokens include something besides a prompt, which is now "a" ?
I would appreciate any comments.

Thanks.

import google.generativeai as palm
import os
palm_api_key = os.getenv("PALM_API_KEY")

palm.configure(api_key=palm_api_key)

def count_tokens(string):
    res = palm.count_message_tokens(model='models/chat-bison-001', prompt=string)
    return res['token_count']

print(count_tokens('a'))
# 12

<BlockedReason.OTHER: 2> with simple questions

PaLM had been working pretty well until I randomly started running into this error: filters=[{'reason': <BlockedReason.OTHER: 2>}], top_p=0.95, top_k=40). The PaLM documentation says blocked reason OTHER (2) is an unspecified filter, and I don't know what that would be. I am using palm.chat with this example: ["what is your name", "my name is al"], and the context "your name is al." However, when I ask the model "what is your name", it immediately returns that error.

IPv6 preventing API access

Using curl in the terminal works just fine.

curl \
-H 'Content-Type: application/json' \
-d '{ "prompt": { "text": "Write a story about a magic backpack"} }' \
"https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText?key=<my_api_key>"

The above works just fine, however, when I try:

import google.generativeai as palm

palm.configure(api_key=<my_api_key>)
result = palm.generate_text(prompt="The opposite of hot is")
print(result)

Results in the following error:

Traceback (most recent call last):
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/grpc/_channel.py", line 1030, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNAVAILABLE
        details = "403:Forbidden"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"403:Forbidden", grpc_status:14, created_time:"2023-05-25T16:47:54.953534672+00:00"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/gene/webhook/create_youtubedb.py", line 329, in <module>
    test2()
  File "/home/gene/webhook/create_youtubedb.py", line 326, in test2
    return palm.generate_text(prompt="The opposite of hot is")
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/google/generativeai/text.py", line 139, in generate_text
    return _generate_response(client=client, request=request)
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/google/generativeai/text.py", line 159, in _generate_response
    response = client.generate_text(request)
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/google/ai/generativelanguage_v1beta2/services/text_service/client.py", line 641, in generate_text
    response = rpc(
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
    return wrapped_func(*args, **kwargs)
  File "/home/gene/webhook/venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.ServiceUnavailable: 503 403:Forbidden

Import error with python 3.8

Hi,

I got the below error when importing this package with the latest version (0.1.0rc2).
Did I miss anything?

>>> import google.generativeai
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/chatbot/lib/python3.8/site-packages/google/generativeai/__init__.py", line 59, in <module>
    from google.generativeai import types
  File "/opt/conda/envs/chatbot/lib/python3.8/site-packages/google/generativeai/types/__init__.py", line 19, in <module>
    from google.generativeai.types.text_types import *
  File "/opt/conda/envs/chatbot/lib/python3.8/site-packages/google/generativeai/types/text_types.py", line 34, in <module>
    class Completion(abc.ABC):
  File "/opt/conda/envs/chatbot/lib/python3.8/site-packages/google/generativeai/types/text_types.py", line 49, in Completion
    filters: Optional[list[safety_types.ContentFilterDict]]
TypeError: 'type' object is not subscriptable
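The failing line annotates a field as Optional[list[...]], and built-in generics such as list[int] are only subscriptable at runtime on Python 3.9+ (PEP 585), which is exactly what this TypeError says on 3.8. Version-portable spellings, as a sketch:

```python
# typing.List is subscriptable on all supported versions, unlike the
# built-in list on Python 3.8.
from typing import List, Optional

filters_annotation = Optional[List[dict]]  # valid back to Python 3.7

# Alternatively, `from __future__ import annotations` at the top of the
# affected module defers annotation evaluation entirely (PEP 563).
print(filters_annotation is not None)  # -> True
```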

google.generativeai.chat does not support max_output_tokens

We want to limit the reply length of chat responses, but google.generativeai.chat does not appear to support the max_output_tokens parameter. I'm not sure whether this is just not implemented yet, or an API limitation, or something else, but the vertexai Python SDK Chat model appears to support it (see Vertex AI Chat model parameters) and so does the google.generativeai.generate_text function.

I had thought that perhaps max_output_tokens wasn't supported in chat, just text generation, but this doc clearly shows it being used in a chat:

chat = chat_model.start_chat(
    context="My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
    examples=[
        InputOutputTextPair(
            input_text="Who do you work for?",
            output_text="I work for Ned.",
        ),
        InputOutputTextPair(
            input_text="What do I like?",
            output_text="Ned likes watching movies.",
        ),
    ],
    temperature=0.3,
    max_output_tokens=200,
    top_p=0.8,
    top_k=40,
)
print(chat.send_message("Are my favorite movies based on a book series?"))

(It's a bit confusing that Google seems to have two different Python SDKs, this google-generativeai one and google-cloud-aiplatform. Is there any difference if all a developer wants to do is send chat to a model and get responses back?)

Issue with Safety Rating Mechanism in Gemini-Pro Model

Description of the bug:

I am writing to report a bug I encountered while using the Gemini-Pro model. The issue pertains to the safety ratings mechanism, where the input and output safety ratings are inconsistent, specifically for the category of "Sexually Explicit" content.

input

safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}

response.prompt_feedback

block_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: HIGH
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}

The issue is that despite providing negligible probabilities for all harm categories, including "Sexually Explicit" content, the model's output unexpectedly rates the "Sexually Explicit" category as high. This seems to be an error in the model's safety rating system, as the input data does not align with the output.

I believe this to be a bug in the API's review mechanism and would appreciate it if your team could investigate and resolve this issue promptly. The accuracy and reliability of safety ratings are crucial for my usage of the Gemini-Pro model.

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

encountered a small bug when I tried to use Gemini API

Description of the bug:

I ran the example code on https://ai.google.dev/tutorials/python_quickstart:

import google.generativeai as genai
import os
os.environ["GOOGLE_API_KEY"] = "My API key"

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

for m in genai.list_models():
  if 'generateContent' in m.supported_generation_methods:
    print(m)

and I got:

Traceback (most recent call last):
  File "C:\Users\zjk\Desktop\Gemini.py", line 10, in <module>
    for m in genai.list_models():
  File "D:\anaconda3\envs\python3.9\lib\site-packages\google\generativeai\models.py", line 165, in list_models
    model = type(model).to_dict(model)
AttributeError: to_dict

Actual vs expected behavior:

I didn't successfully run the code, but I guess I was supposed to see

gemini-pro
gemini-pro-vision

as the documentation shows

Any other information you'd like to share?

google-generativeai version: 0.3.2
python version: 3.9.15

Empty Chat Responses

I'm noticing that a few simple questions are not getting responses when I try to use them with the chat methods. For example:

palm.chat(messages='Why is the sky blue?')
> ChatResponse(model='models/chat-bison-001', context='', examples=[], messages=[{'author': '0', 'content': 'Why is the sky blue?'}, None], temperature=None, candidate_count=None, candidates=[], top_p=None, top_k=None, filters=[])

The same question works fine with generate_text

palm.generate_text(model=model, prompt='Why is the sky blue?')
> Completion(candidates=[{'output': 'The sky is blue due to a phenomenon called Rayleigh scattering. This is the scattering of light by particles that are smaller than the wavelength of light. The particles in the atmosphere that cause Rayleigh scattering are molecules of nitrogen and oxygen.\n\nWhen sunlight hits these molecules, it is scattered in all directions. However, blue light is scattered more than other colors because it has a shorter wavelength. This is why the sky appears blue during the day.\n\nThe amount of scattering depends on the amount of particles in the atmosphere. This is why the sky appears darker at sunrise and sunset, when there are more particles in the air.\n\nThe sky can also appear blue at other times, such as when there is dust or moisture in the air. This is because these particles can also scatter sunlight.', 'safety_ratings': [{'category': <HarmCategory.HARM_CATEGORY_DEROGATORY: 1>, 'probability': <HarmProbability.NEGLIGIBLE: 1>}, {'category': <HarmCategory.HARM_CATEGORY_TOXICITY: 2>, 'probability': <HarmProbability.NEGLIGIBLE: 1>}, {'category': <HarmCategory.HARM_CATEGORY_VIOLENCE: 3>, 'probability': <HarmProbability.NEGLIGIBLE: 1>}, {'category': <HarmCategory.HARM_CATEGORY_SEXUAL: 4>, 'probability': <HarmProbability.NEGLIGIBLE: 1>}, {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>, 'probability': <HarmProbability.NEGLIGIBLE: 1>}, {'category': <HarmCategory.HARM_CATEGORY_DANGEROUS: 6>, 'probability': <HarmProbability.NEGLIGIBLE: 1>}]}], result='The sky is blue due to a phenomenon called Rayleigh scattering. This is the scattering of light by particles that are smaller than the wavelength of light. The particles in the atmosphere that cause Rayleigh scattering are molecules of nitrogen and oxygen.\n\nWhen sunlight hits these molecules, it is scattered in all directions. However, blue light is scattered more than other colors because it has a shorter wavelength. 
This is why the sky appears blue during the day.\n\nThe amount of scattering depends on the amount of particles in the atmosphere. This is why the sky appears darker at sunrise and sunset, when there are more particles in the air.\n\nThe sky can also appear blue at other times, such as when there is dust or moisture in the air. This is because these particles can also scatter sunlight.', filters=[], safety_feedback=[])

Not sure if it's the model or the client, but how can developers generally go about troubleshooting this? Is there any way to look at the raw API response?

how to convert resp to dict or json?

Description of the feature request:

Hi, I have a question. I wrote this code, but resp is not JSON serializable:

genai.configure(api_key=apiKey)
model = genai.GenerativeModel("gemini-pro")
resp = model.generate_content("tell me a joke")
b = resp.candidates
# or resp._result

How can I convert resp to a dict or JSON? Thanks for the help ❤️
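The response wraps a proto-plus message, and proto-plus messages expose a to_dict() classmethod; the commented lines show the assumed call shape against the real SDK. FakeMessage below only illustrates the classmethod pattern and is not the SDK type:

```python
import json

# Assumed call shape for the real SDK object:
#
#   result_dict = type(resp._result).to_dict(resp._result)
#   print(json.dumps(result_dict, indent=2))
#
# Stand-in class demonstrating the same to_dict classmethod pattern.
class FakeMessage:
    def __init__(self, text: str):
        self.text = text

    @classmethod
    def to_dict(cls, msg: "FakeMessage") -> dict:
        return {"text": msg.text}

resp_result = FakeMessage("tell me a joke")
print(json.dumps(type(resp_result).to_dict(resp_result)))  # -> {"text": "tell me a joke"}
```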

What problem are you trying to solve with this feature?

No response

Any other information you'd like to share?

No response

Add option to set a client for GenerativeModel

Description of the feature request:

Currently, the Python Quickstart documentation for using the Gemini API suggests setting the API key in the default client configuration.

import google.generativeai as genai

genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("What is the meaning of life?")
print(response.text) # Model response gets printed

In addition to this, the package should also give an option to set a different key when creating the model. Something like:

model = genai.GenerativeModel('gemini-pro', api_key=OTHER_GOOGLE_API_KEY)

Currently, a workaround is to manually set the private _client variable for the model

import google.generativeai as genai
from google.ai import generativelanguage as glm

client = glm.GenerativeServiceClient(
    client_options={'api_key':OTHER_GOOGLE_API_KEY})
model = genai.GenerativeModel('gemini-pro')
model._client = client
response = model.generate_content("What is the meaning of life?")
print(response.text) # Model response gets printed

What problem are you trying to solve with this feature?

The current implementation requires the API key to be set globally. Allowing the keys to be set with a smaller scope will be useful for applications that use multiple API keys concurrently for different tasks.

Any other information you'd like to share?

One potential implementation for this feature could be to take an additional keyword argument for the client and set it as the _client during the __init__ method of the GenerativeModel class.

# google/generative_ai/generative_models.py
class GenerativeModel:
    def __init__(
        self,
        model_name: str = "gemini-m",
        safety_settings: safety_types.SafetySettingOptions | None = None,
        generation_config: generation_types.GenerationConfigType | None = None,
        client = None, # Additional kwarg
    ):
        ...
        self._client = client
        ...

Then, the user can optionally supply the client with their API key if they do not want the default credentials to be used.

# main.py
import google.generativeai as genai
from google.ai import generativelanguage as glm

client = glm.GenerativeServiceClient(
    client_options={'api_key':OTHER_GOOGLE_API_KEY})
model = genai.GenerativeModel('gemini-pro', client=client)
response = model.generate_content("What is the meaning of life?")
print(response.text) # Model response gets printed

Pydantic throws an error when import types

Description of the bug:

When classes from types are used in other classes that use pydantic, pydantic throws this error:
pydantic.errors.PydanticUserError: Please use typing_extensions.TypedDict instead of typing.TypedDict on Python < 3.12.

Actual vs expected behavior:

No error

Any other information you'd like to share?

Should be an easy fix to add:

import sys
if sys.version_info >= (3, 12):
    from typing import TypedDict
else:
    from typing_extensions import TypedDict

Retry

Hi,
How can I handle rate-limit errors using the retry function?
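Complementary to retrying, client-side pacing keeps requests under the quota in the first place. A minimal sleep-based limiter, independent of any SDK API (the commented line shows where the paced call would go):

```python
import time

# Block until at least min_interval seconds have passed since the last call.
class RateLimiter:
    def __init__(self, calls_per_second: float):
        self.min_interval = 1.0 / calls_per_second
        self._last = 0.0

    def wait(self) -> None:
        now = time.monotonic()
        sleep_for = self.min_interval - (now - self._last)
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()

limiter = RateLimiter(calls_per_second=50)
for _ in range(3):
    limiter.wait()
    # response = model.generate_content(prompt)  # paced SDK call goes here
print("done")  # -> done
```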

Does generative language api support something to set location?

I see that there is a way to set the location in Vertex AI:

from google.cloud import aiplatform
import vertexai
from vertexai.language_models import TextGenerationModel

project_id = "test"
location = "us-central1"

vertexai.init(project=project_id, location=location)

Is there any method to do the same thing in generative language api (this repository)?

module 'google.generativeai' has no attribute 'GenerativeModel'

Description of the bug:

Executing this code
with either
from langchain_google_genai import ChatGoogleGenerativeAI

or

from langchain_google_genai.chat_models import ChatGoogleGenerativeAI

def generate_lecture(topic:str, context:str):
    
  template ="""
  As an accomplished university professor and expert in {topic}, your task is to develop an elaborate, exhaustive, and highly detailed lecture on the subject. 
  Remember to generate content ensuring both novice learners and advanced students can benefit from your expertise.
  while leveraging the provided context
  
  Context: {context} """

  
  prompt = ChatPromptTemplate.from_template(template)

  model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)

 
  response = model.invoke(template)
  return response.content

Actual vs expected behavior:

Actual behaviour
The error below is thrown

  File "C:\Users\ibokk\RobotForge\mvp\service\llm.py", line 80, in generate_lecture
    model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ibokk\anaconda3\envs\robotforge\Lib\site-packages\langchain_core\load\serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "C:\Users\ibokk\anaconda3\envs\robotforge\Lib\site-packages\langchain_google_genai\chat_models.py", line 502, in validate_environment
    values["_generative_model"] = genai.GenerativeModel(model_name=model)
                                  ^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'google.generativeai' has no attribute 'GenerativeModel'

Expected behaviour

A poor result will be random text generated which is not relevant to the prompt provided. An excellent result will be text generated relevant to the template provided.

Any other information you'd like to share?

OS: Windows 11
Env: Conda 23.7.2, python: 3.11
Package: google-gen-ai : 0.3.1

Firefox: Errors in python quickstart colab.

Description of the bug:

I got this error while trying the Gemini AI Python quickstart in Google Colab. Under "List models", the page has the following piece of code:

for m in genai.list_models():
  if 'generateContent' in m.supported_generation_methods:
    print(m.name)

and this is the error I get

ERROR:tornado.access:500 GET /v1beta/models?pageSize=50&%24alt=json%3Benum-encoding%3Dint (127.0.0.1) 2738.11ms

---------------------------------------------------------------------------

InternalServerError                       Traceback (most recent call last)

[<ipython-input-5-77709e92acfe>](https://localhost:8080/#) in <cell line: 1>()
----> 1 for m in genai.list_models():
      2   if 'generateContent' in m.supported_generation_methods:
      3     print(m.name)

7 frames

[/usr/local/lib/python3.10/dist-packages/google/ai/generativelanguage_v1beta/services/model_service/transports/rest.py](https://localhost:8080/#) in __call__(self, request, retry, timeout, metadata)
    826             # subclass.
    827             if response.status_code >= 400:
--> 828                 raise core_exceptions.from_http_response(response)
    829 
    830             # Return the response

InternalServerError: 500 GET http://localhost:46035/v1beta/models?pageSize=50&%24alt=json%3Benum-encoding%3Dint: TypeError: NetworkError when attempting to fetch resource.

Actual vs expected behavior:

Actual result

The same 500 InternalServerError and traceback shown above.

Expected result

It should list models gemini-pro and gemini-pro-vision.

Any other information you'd like to share?

  1. The code is able to access my API key and I was able to print it.
  2. I tried this with 2 different google accounts and the issue is the same.

Unable to use safety settings in chat mode of chat-bison

It would be nice to be able to configure the safety settings for chat-bison. This would hopefully reduce how often "none" is returned as a response. It would also help make examples more reliable, and let chat mode register a name.

ModuleNotFoundError: No module named 'google.ai'

Description of the bug:

I'm encountering an error when trying to use the google.generativeai library in my Python script.

import google.generativeai as genai
genai.configure(api_key="MY_API_KEY")
# Set up the model
generation_config = {
  "temperature": 0.0,
  "top_p": 1,
  "top_k": 1,
  "max_output_tokens": 4096,
}

safety_settings = [
  {
    "category": "HARM_CATEGORY_HARASSMENT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_HATE_SPEECH",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  }
]

model = genai.GenerativeModel(model_name="gemini-pro",
                              generation_config=generation_config,
                              safety_settings=safety_settings)

prompt_parts = [
  " You are an AI assistant named 'Translator'. You will be receiving content in various languages. If the material is in English, provide the exact original content; otherwise, translate it into English.",
  "input: Le temps est très beau aujourd'hui, idéal pour jouer au volley-ball à la plage.",
  "output: Today's weather is very nice, perfect for playing volleyball on the beach.",
  "input: 明日、一緒に天ぷらを食べに行きませんか?",
  "output: How about we go eat tempura together tomorrow?",
  "input: Der Aktienmarkt ist heute um weitere 10% gefallen. Wie kann ich Geld verdienen?",
  "output: ",
]

response = model.generate_content(prompt_parts)
print(response.text)

I downloaded the generative-ai-python source code from https://github.com/google/generative-ai-python/archive/refs/tags/v0.3.1.tar.gz. Then I ran the command "python setup.py install" in my python3.8.10 virtual env.

Actual vs expected behavior:

Actual behaviour

When I run the script, I get the following error message:

Traceback (most recent call last):
  File "gemini-api-test.py", line 1, in <module>
    import google.generativeai as genai
  File "/data/env/gemini/lib/python3.8/site-packages/google_generativeai-0.3.1-py3.8.egg/google/generativeai/__init__.py", line 45, in <module>
    from google.generativeai import types
  File "/data/env/gemini/lib/python3.8/site-packages/google_generativeai-0.3.1-py3.8.egg/google/generativeai/types/__init__.py", line 17, in <module>
    from google.generativeai.types.discuss_types import *
  File "/data/env/gemini/lib/python3.8/site-packages/google_generativeai-0.3.1-py3.8.egg/google/generativeai/types/discuss_types.py", line 21, in <module>
    import google.ai.generativelanguage as glm
ModuleNotFoundError: No module named 'google.ai'

Expected behaviour

The script should run without errors and generate translations for the provided prompts.

Any other information you'd like to share?

OS: Ubuntu 20.04
Env: virtualenv, Python:3.8.10
Package:
Name: google-generativeai
Version: 0.3.1
Summary: Google Generative AI High level API client library and tools.
Home-page: https://github.com/google/generative-ai-python
Author: Google LLC
Author-email: [email protected]
License: Apache 2.0
Location: /data/env/gemini/lib/python3.8/site-packages/google_generativeai-0.3.1-py3.8.egg
Requires: google-ai-generativelanguage, google-api-core, google-auth, protobuf, tqdm
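A likely fix, assuming the breakage comes from installing via `python setup.py install` (which can leave the `google.*` namespace packages inside an egg that the interpreter cannot merge with `google.ai`): reinstall from PyPI with pip so both namespace packages resolve from the same site-packages tree. A sketch of that, inside the same virtualenv:

```shell
pip uninstall -y google-generativeai
pip install google-generativeai==0.3.1
# verify both namespace packages import cleanly
python -c "import google.ai.generativelanguage, google.generativeai"
```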

API key from MakerSuite Access Denied

Description of the bug:

When getting an API key I get a message that access is restricted. I think I need that key to be able to install and use the API. Is it possibly my administrator who is blocking me? I would need to try this on my personal laptop.

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

ModuleNotFoundError - GoogleAI, Pyinstaller

Description of the bug:

I am building a simple app. When I try to turn the .py into an exe, nothing works, no matter which tool I use (PyInstaller, py2exe). It produces an exe for me, but it always says module google not found. I also tried it with very simple code, using only the google library, but hit the same issue:

import google.generativeai as genai

genai.configure(api_key="///")
gemini_model = genai.GenerativeModel('gemini-pro')
prompt = input()
completion = gemini_model.generate_content(
    prompt,
    generation_config=genai.types.GenerationConfig(temperature=0.3)
)

response = completion.text
print(response)

I use pyinstaller --onefile main.py
I installed the library and it works successfully in PyCharm as a .py.
Here is the error:
[Screenshot: ModuleNotFoundError traceback]

Actual vs expected behavior:

It should be included. I also tried to include the library explicitly, but it just doesn't work. Tried it on different machines, Python versions, and so on...

Any other information you'd like to share?

Using Python 3.11.2
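A workaround sketch that often helps with namespace packages that PyInstaller's static analysis misses is to collect them explicitly. The `--collect-all` targets below are an assumption about which packages need bundling, not a verified recipe:

```shell
pyinstaller --onefile \
  --collect-all google.generativeai \
  --collect-all google.ai.generativelanguage \
  main.py
```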

Readme for new Gemini Code Samples on Recipe to be updated to accept Image

Description of the bug:

The current example produces a base64 response instead of the actual recipe of the item.

Actual vs expected behavior:

[Screenshot of the actual output]

Any other information you'd like to share?

This code works correctly:

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="[API_KEY]")

model = genai.GenerativeModel('gemini-pro-vision')
cookie_picture = Image.open('image.png')
prompt = "Give me a recipe for this:"

response = model.generate_content(
    contents=[prompt, cookie_picture]
)

print(response.text)

When using Python 3.11.4 to invoke the Gemini Pro Vision model and upload a PIL image, a type error is reported.

Description of the bug:

! curl -o image.jpg "https://t0.gstatic.com/licensed-image?q=tbn:ANd9GcQ_Kevbk21QBRy-PgB4kQpS79brbmmEG7m3VOTShAn4PecDU5H5UxrJxE3Dw1JiaG17V88QIol19-3TM2wCHw"

img = PIL.Image.open('image.jpg')
model = genai.GenerativeModel('gemini-pro-vision')
response = model.generate_content(img)


TypeError Traceback (most recent call last)
Cell In[56], line 1
----> 1 response = model.generate_content(img)

File ~/.local/lib/python3.11/site-packages/google/generativeai/generative_models.py:229, in GenerativeModel.generate_content(self, contents, generation_config, safety_settings, stream, **kwargs)
219 @string_utils.set_doc(_GENERATE_CONTENT_DOC)
220 def generate_content(
221 self,
(...)
227 **kwargs,
228 ) -> generation_types.GenerateContentResponse:
--> 229 request = self._prepare_request(
230 contents=contents,
231 generation_config=generation_config,
232 safety_settings=safety_settings,
233 **kwargs,
234 )
235 if self._client is None:
236 self._client = client.get_default_generative_client()

File ~/.local/lib/python3.11/site-packages/google/generativeai/generative_models.py:200, in GenerativeModel._prepare_request(self, contents, generation_config, safety_settings, **kwargs)
197 if not contents:
198 raise TypeError("contents must not be empty")
--> 200 contents = content_types.to_contents(contents)
202 generation_config = generation_types.to_generation_config_dict(generation_config)
203 merged_gc = self._generation_config.copy()

File ~/.local/lib/python3.11/site-packages/google/generativeai/types/content_types.py:235, in to_contents(contents)
230 except TypeError:
231 # If you get a TypeError here it's probably because that was a list
232 # of parts, not a list of contents, so fall back to to_content.
233 pass
--> 235 contents = [to_content(contents)]
236 return contents

File ~/.local/lib/python3.11/site-packages/google/generativeai/types/content_types.py:201, in to_content(content)
198 return glm.Content(parts=[to_part(part) for part in content])
199 else:
200 # Maybe this is a Part?
--> 201 return glm.Content(parts=[to_part(content)])

File ~/.local/lib/python3.11/site-packages/google/generativeai/types/content_types.py:171, in to_part(part)
168 return glm.Part(text=part)
169 else:
170 # Maybe it can be turned into a blob?
--> 171 return glm.Part(inline_data=to_blob(part))

File ~/.local/lib/python3.11/site-packages/google/generativeai/types/content_types.py:140, in to_blob(blob)
136 if isinstance(blob, Mapping):
137 raise KeyError(
138 "Could not recognize the intended type of the dict\n" "A content should have "
139 )
--> 140 raise TypeError(
141 "Could not create Blob, expected Blob, dict or an Image type"
142 "(PIL.Image.Image or IPython.display.Image).\n"
143 f"Got a: {type(blob)}\n"
144 f"Value: {blob}"
145 )

TypeError: Could not create Blob, expected Blob, dict or an Image type(PIL.Image.Image or IPython.display.Image).
Got a: <class 'PIL.JpegImagePlugin.JpegImageFile'>
Value: <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2048x1365 at 0x7F89DDFC8B10>

Actual vs expected behavior:

No response

Any other information you'd like to share?

python 3.11.4
centos 7.9
google-ai-generativelanguage 0.4.0
google-api-core 2.15.0
google-auth 2.25.2
google-generativeai 0.3.1
googleapis-common-protos 1.62.0
langchain-google-genai 0.0.4
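Until the SDK accepts PIL subclasses like `JpegImageFile` directly, one workaround is to pass the raw image bytes in the dict form that `to_blob` already accepts (per the traceback, a `Mapping` is handled before the `TypeError` is raised). The helper below is a hypothetical sketch of that shape:

```python
def to_blob_dict(data: bytes, mime_type: str = 'image/jpeg') -> dict:
    # Dict shape accepted by content_types.to_blob: a mime type plus raw bytes.
    return {'mime_type': mime_type, 'data': data}
```

Then call something like `model.generate_content([prompt, to_blob_dict(open('image.jpg', 'rb').read())])` instead of passing the PIL object.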

Unexpected type of call %s" % type(call) when doing async chat call `send_message_async`

Description of the bug:

"Unexpected type of call %s" % type(call) is raised when doing an async chat call with send_message_async.
I use rest as the transport, with my own proxy API endpoint.

Here is the code:

import asyncio
import os
import traceback

import google.generativeai as genai
from dotenv import load_dotenv

load_dotenv()

GEMINI_API_KEY = os.getenv('GEMINI_API_KEY')
GEMINI_API_ENDPOINT = os.getenv('GEMINI_API_ENDPOINT')


async def main():
    try:
        genai.configure(
            api_key=GEMINI_API_KEY,
            transport="rest",
            client_options={"api_endpoint": GEMINI_API_ENDPOINT}
        )

        model = genai.GenerativeModel('gemini-pro')

        chat = model.start_chat()

        # response = chat.send_message("Use python to write a fib func", stream=True)	# line a. This line works
        response = await chat.send_message_async("Use python to write a fib func", stream=True) # line b. This line will cause an error
        for chunk in response:
            print('=' * 80)
            print(chunk.text)
    except Exception as e:
        traceback.print_exc()


def run_main():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(main())
    except Exception as e:
        print(e)
    finally:
        loop.close()


run_main()

And after run this code, the error:

D:\xxx\venv\Scripts\python.exe D:\xxx\chore\gemini\gemini_test.py 
Traceback (most recent call last):
  File "D:\xxx\chore\gemini\gemini_test.py", line 27, in main
    response = await chat.send_message_async("Use python to write a fib func", stream=True) # This line will cause an error
  File "D:\xxx\venv\lib\site-packages\google\generativeai\generative_models.py", line 410, in send_message_async
    response = await self.model.generate_content_async(
  File "D:\xxx\venv\lib\site-packages\google\generativeai\generative_models.py", line 272, in generate_content_async
    iterator = await self._async_client.stream_generate_content(request)
  File "D:\xxx\venv\lib\site-packages\google\api_core\retry_async.py", line 223, in retry_wrapped_func
    return await retry_target(
  File "D:\xxx\venv\lib\site-packages\google\api_core\retry_async.py", line 121, in retry_target
    return await asyncio.wait_for(
  File "C:\xxx\Python\Python310\lib\asyncio\tasks.py", line 445, in wait_for
    return fut.result()
  File "D:\xxx\venv\lib\site-packages\google\api_core\grpc_helpers_async.py", line 187, in error_remapped_callable
    raise TypeError("Unexpected type of call %s" % type(call))
TypeError: Unexpected type of call <class 'google.api_core.rest_streaming.ResponseIterator'>

Process finished with exit code 0

Actual vs expected behavior:

It should work the same as the synchronous send_message call on line a:

D:\xxx\venv\Scripts\python.exe D:\xxx\chore\gemini\gemini_test.py 
================================================================================
```python
def fib(n):
  """Calculates the nth Fibonacci
================================================================================
 number.

  Args:
    n: The index of the Fibonacci number to calculate.

  Returns:
    The nth Fibonacci number.
  
================================================================================
"""

  if n < 2:
    return n
  else:
    return fib(n-1) + fib(n-2)

Process finished with exit code 0


Any other information you'd like to share?

python: 3.10.5
os: windows 11
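Until the REST async streaming path is fixed, a workaround sketch is to run the working synchronous call in a worker thread so the event loop isn't blocked (requires Python 3.9+ for `asyncio.to_thread`; the wrapper name is illustrative):

```python
import asyncio

async def send_message_in_thread(chat, text: str, **kwargs):
    # The sync send_message works over REST, so off-load it to a thread
    # instead of going through the broken send_message_async path.
    return await asyncio.to_thread(chat.send_message, text, **kwargs)
```

Usage in the script above would be `response = await send_message_in_thread(chat, "Use python to write a fib func", stream=True)`.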

PermissionDenied trying to call chat() or any other service method

Getting the following error trying to execute this code.
NOTE: using the lower level API by calling aiplatform.gapic.PredictionServiceClient() works fine...

import google.generativeai as genai
import os

genai.configure(api_key=os.environ['GOOGLE_API_KEY'])

response = genai.chat(messages=["Hello."])

Exception has occurred: PermissionDenied (note: full exception trace is shown but execution is paused at: _run_module_as_main)
403 Generative Language API has not been used in project 411082643095 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/generativelanguage.googleapis.com/overview?project=411082643095 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. [links {
description: "Google developers console API activation"
url: "https://console.developers.google.com/apis/api/generativelanguage.googleapis.com/overview?project=411082643095"
}
, reason: "SERVICE_DISABLED"
domain: "googleapis.com"
metadata {
key: "consumer"
value: "projects/411082643095"
}
metadata {
key: "service"
value: "generativelanguage.googleapis.com"
}
]

Response in other languages always returns None

I'm trying to get a response in other languages, but it always returns None.

message = "HELLO!"
prompt = f"""
ANSWER IN CHINESE.
USER:{message}
ASSISTANT:"""

response = palm.chat(
    prompt=prompt
)

print(response.messages) 

Chinese
[{'author': '0', 'content': '\nANSWER IN CHINESE.\nUSER:HELLO!\nASSISTANT:'}, None]
Thai
[{'author': '0', 'content': '\nANSWER IN THAI.\nUSER:HELLO!\nASSISTANT:'}, None]
Japanese
[{'author': '0', 'content': '\nANSWER IN JAPANESE.\nUSER:HELLO!\nASSISTANT:'}, None]
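Until blocked or empty replies surface a clearer signal, a defensive accessor avoids tripping over that trailing `None` in `response.messages` (the helper name is illustrative):

```python
def last_reply(messages, default=None):
    # palm.chat appends None to `messages` when no candidate comes back,
    # e.g. when the reply was filtered; guard before dereferencing.
    last = messages[-1] if messages else None
    if last is None:
        return default
    return last.get('content', default)
```

With the Chinese example above, `last_reply(response.messages, '(blocked)')` returns `'(blocked)'` instead of handing back `None`.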

google.api_core.exceptions.InternalServerError: 500 An internal error has occurred.

Description of the bug:

The error occurs when "res = geminiClient.generate_content(conversation).text" is reached again in a recursive function call.
I expected it to work the same way it did on the function's first pass. How do I fix this?

Error/Bug Found:

Traceback (most recent call last):
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\api_core\grpc_helpers.py", line 79, in error_remapped_callable
    return callable_(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\grpc\_channel.py", line 1160, in __call__
    return _end_unary_response_blocking(state, call, False, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\grpc\_channel.py", line 1003, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.INTERNAL
        details = "An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting"
        debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.115.95:443 {created_time:"2023-12-15T04:18:34.1915935+00:00", grpc_status:13, grpc_message:"An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting"}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\eworo\DELTA\src\test.py", line 48, in <module>
    infer(input("Q: "))
  File "c:\Users\eworo\DELTA\src\test.py", line 45, in infer
    infer("")
  File "c:\Users\eworo\DELTA\src\test.py", line 40, in infer
    res = geminiClient.generate_content(conversation).text
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\generativeai\generative_models.py", line 243, in generate_content
    response = self._client.generate_content(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\ai\generativelanguage_v1beta\services\generative_service\client.py", line 566, in generate_content
    response = rpc(
               ^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
    return wrapped_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\api_core\retry.py", line 372, in retry_wrapped_func
    return retry_target(
           ^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\api_core\retry.py", line 207, in retry_target
    result = target()
             ^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eworo\DELTA\env\Lib\site-packages\google\api_core\grpc_helpers.py", line 81, in error_remapped_callable
    raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting

Any other information you'd like to share?

I was using the Gemini API.

Here is code to recreate the error:

import google.generativeai as genai

genai.configure(api_key="API_KEY")

geminiClient = genai.GenerativeModel(model_name="gemini-pro")

conversation = [{'role':'user','parts': [f"Hello!"]}, {'role':'model','parts': ["Hey there Anon!"]}]


def infer(query):
    conversation.append({'role':'user','parts': [query]})
    res = geminiClient.generate_content(conversation).text
    print("A: " + res)
    conversation.append({'role':'model','parts': [res]})

    if res == "Y":
        infer("")

while True:
    infer(input("Q: "))

To reproduce the error, tell the model "only respond to me with "Y" until I say stop".
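Transient 500s can also be smoothed over client-side with a small backoff wrapper; this is a sketch of that pattern, not a fix for the underlying server error:

```python
import time

def with_retries(call, attempts=3, base_delay=1.0):
    # Retry a flaky zero-argument callable with exponential backoff.
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

In the script above that would look like `res = with_retries(lambda: geminiClient.generate_content(conversation).text)`.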

InternalServerError in `ChatSession.send_message` with Empty String

Description of the bug:

When passing empty message to ChatSession.send_message it triggers a 500 Internal Server Error.

Here is an example :

import google.generativeai as genai

genai.configure(api_key ="YOUR API KEY")

model = genai.GenerativeModel(model_name='gemini-pro')

chat = model.start_chat(history=[])
response = chat.send_message("")

Actual vs expected behavior:

Actual behavior:

InternalServerError: 500 POST https://dp.kaggle.net/palmapi/v1beta/models/gemini-pro:generateContent?%24alt=json%3Benum-encoding%3Dint: An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting

Expected behavior:
It should be handled in the same way as in GenerativeModel.generate_content:

TypeError: contents must not be empty

Any other information you'd like to share?

No response
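Until that check lands server-side, the same validation can be mirrored client-side before calling the API (a sketch; the wrapper name is illustrative):

```python
def safe_send(chat, message: str):
    # Mirror GenerativeModel.generate_content's "contents must not be empty"
    # check before hitting the network.
    if not message or not message.strip():
        raise TypeError('contents must not be empty')
    return chat.send_message(message)
```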

IndexError: list index out of range

I am encountering an issue while using the google.generativeai chat API. Whenever I input the phrase "who are you?" as a message, it consistently triggers an error. This behavior seems unexpected and needs to be addressed.

response = palm.chat(messages=["who are you?"])
# error
  ...
  File "/Users/jybaek/work/xxxx/.venv/lib/python3.8/site-packages/google/generativeai/discuss.py", line 413, in reply
    return _generate_response(request=request, client=self._client)
  File "/Users/jybaek/work/xxxx/.venv/lib/python3.8/site-packages/google/generativeai/discuss.py", line 461, in _generate_response
    return _build_chat_response(request, response, client)
  File "/Users/jybaek/work/xxxx/.venv/lib/python3.8/site-packages/google/generativeai/discuss.py", line 443, in _build_chat_response
    request["messages"].append(response["candidates"][0])
IndexError: list index out of range

Is it a known issue? Please check.
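Until discuss.py guards that indexing, a defensive accessor on the raw response dict looks something like this (a sketch of the guard, not a library fix):

```python
def first_candidate(response: dict):
    # The API can legitimately return zero candidates (e.g. when the
    # reply is filtered), so avoid a bare response['candidates'][0].
    candidates = response.get('candidates') or []
    return candidates[0] if candidates else None
```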

Palm API - 403 Request had insufficient authentication scopes

This is similar to issue #50. Running the following:
import google.generativeai as palm
import os
palm.configure(api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

response = palm.chat(messages='Hello')

response.last

and receive:
PermissionDenied: 403 Request had insufficient authentication scopes. [reason: "ACCESS_TOKEN_SCOPE_INSUFFICIENT"
domain: "googleapis.com"
metadata {
key: "method"
value: "google.ai.generativelanguage.v1beta2.DiscussService.GenerateMessage"
}
metadata {
key: "service"
value: "generativelanguage.googleapis.com"
}
I get successful results running the curl example.
I have run it under 2 Python environments.
I'm a fairly new user, but it seems to me that it can't find "google.ai.generativelanguage.v1beta2.DiscussService.GenerateMessage".

I was approved for the beta test. I tried to add the scope https://www.googleapis.com/auth/generative-language, but it could not be found.
I don't know what more steps I can take.

Deploy as conda library

Description of the feature request:

Hi

Any chance of deploying this as a conda library ?

What problem are you trying to solve with this feature?

No response

Any other information you'd like to share?

No response

"1" is blocked for safety reasons? Seriously?

Description of the bug:

When I send "1" to gemini-pro, it raises an exception:
ValueError: The response.parts quick accessor only works for a single candidate, but none were returned. Check the response.prompt_feedback to see if the prompt was blocked.

Then I print response.prompt_feedback and get this:

block_reason: SAFETY
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: LOW
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: MEDIUM
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: NEGLIGIBLE
}
Then I go to AI Studio:

[Screenshot: the same prompt in AI Studio]

Actual vs expected behavior:

So can anybody tell me what happened?

Any other information you'd like to share?

No response

google-generativeai Installation on Render Linux Servers

Description of the bug:

ERROR: No matching distribution found for google-generativeai

Actual vs expected behavior:

No response

Any other information you'd like to share?

I am trying to install it from the requirements.txt file on render.com

JSON Mode

Description of the feature request:

Didn't the announcement video specifically mention a JSON mode? I'm unable to find a flag for it in the API, and asking gemini-pro to produce JSON output returns invalid JSON data.

What problem are you trying to solve with this feature?

JSON parsable output for building apps

Any other information you'd like to share?

No response
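Without a server-side JSON mode, a common stopgap is to prompt for JSON and then pull the first object out of the reply, tolerating prose and code fences around it. A minimal sketch, assuming a single top-level object (the greedy regex breaks on multiple separate objects):

```python
import json
import re

def extract_json(text: str):
    # Grab everything from the first '{' to the last '}' and parse it.
    match = re.search(r'\{.*\}', text, re.DOTALL)
    if match is None:
        raise ValueError('no JSON object found in reply')
    return json.loads(match.group(0))
```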

Async support

Description of the feature request:

Since we are calling an external API, adding async support will help improve latency in applications.

What problem are you trying to solve with this feature?

Calling embeddings and text generation asynchronously

Any other information you'd like to share?

No response
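In the meantime, independent calls can at least be overlapped once any async entry point exists; the fan-out itself is plain asyncio. Here `fn` stands in for a hypothetical async wrapper around an embedding or generation call:

```python
import asyncio

async def run_concurrently(fn, items):
    # Issue all requests at once and await them together.
    return await asyncio.gather(*(fn(item) for item in items))
```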

Don't use ADCs if an API key is specified

I have been trying to use the PaLM API and the palm.chat() function with Google's new generative AI library. I've been in a maze of documentation and errors and I can't seem to get past this one. My code is very simple, and the error comes from a simple request with palm.chat(). I have an API key that works when I test it with curl. I also downloaded credentials. I set up an OAuth consent screen, because I thought that might help me add the scope that I need, but I can't see what the scope requirement would be for palm.chat. Here is my code:

import google.generativeai as palm
import os
palm.configure(api_key='XXXXXXXXXXXXXXXXXXXXX')

os.environ['GOOGLE_APPLICATION_CREDENTIALS']='XXXXXXXXX/.config/gcloud/application_default_credentials.json'

response = palm.chat(messages='Hello')

response.last

The exact error I am getting is:

File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.PermissionDenied: 403 Request had insufficient authentication scopes. [reason: "ACCESS_TOKEN_SCOPE_INSUFFICIENT" domain: "googleapis.com" metadata { key: "method" value: "google.ai.generativelanguage.v1beta2.TextService.GenerateText" } metadata { key: "service" value: "generativelanguage.googleapis.com" }

I think the problem is that I need to add some kind of scope to oauth but there is no documentation anywhere that I can find that says what that might be. I've posted this on google and stack overflow but no one has had a solution, so any help at all would be greatly appreciated. thank you so much!
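As a stopgap while the client appears to prefer Application Default Credentials over the explicit key, clearing the credentials variable before configuring makes the API key the only credential in play. This is a sketch of the workaround, not confirmed library behavior:

```python
import os

# Remove stale Application Default Credentials so the explicit api_key
# passed to palm.configure is the only credential the client can find.
os.environ.pop('GOOGLE_APPLICATION_CREDENTIALS', None)
```

Then call palm.configure(api_key=...) as before.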

Safety filter is too strong today

Description of the bug:

Hello, I am using the PaLM API through Python/JS and MakerSuite. I noticed that the safety filter has become unexpectedly strong recently. My very benign inference requests are all answered with {reason: "OTHER"}. Is there a way to change my safety settings to get a result?

Here is an example in MakerSuite (I have already lowered the safety filter thresholds, see screenshot 2):

[Screenshots 1 and 2: the MakerSuite prompt and the lowered safety-filter thresholds]

Actual vs expected behavior:

I expect to get result "comment vas-tu ?" instead of an empty response with OTHER reason. I believe PaLM 2 API used to give me this correct response two days ago.

Any other information you'd like to share?

No response
