Comments (12)
Thanks for your report. Are you able to share the specific prompt which is causing this issue?
from generative-ai-python.
I'm not OP, but this is trivial to replicate: just go to Google AI Studio, turn the safety settings down to "low", and use a harmless prompt like this: I said, "let's watch 'Sex and the City'"
You'll trigger the "Sexually Explicit" filter. The bar is so ridiculously low as to make creative writing virtually impossible.
I am getting the following safety error for the prompt "what is 1+1":
```
(/home/wolverine/llama2/venv) wolverine@wolverine-GP66-Leopard-11UG:~/llama2/tchat$ python main.py
what is 1+1
Traceback (most recent call last):
  File "/home/wolverine/llama2/tchat/main.py", line 27, in <module>
    result = chain({"content": content})
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
    return self._generate(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 550, in _generate
    response: genai.types.GenerateContentResponse = _chat_with_retry(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 140, in _chat_with_retry
    return _chat_with_retry(**kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/home/wolverine/llama2/venv/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/home/wolverine/llama2/venv/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 138, in _chat_with_retry
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 131, in _chat_with_retry
    return generation_method(**kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/google/generativeai/generative_models.py", line 384, in send_message
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
content {
  parts {
    text: "2"
  }
  role: "model"
}
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
```
Looks like the minimum safety threshold users are allowed to configure is BLOCK_ONLY_HIGH.
You can verify this in AI Studio by lowering the safety settings in the right-side panel to the minimum and clicking the "Get code" option.
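For reference, here is a minimal sketch of passing those relaxed thresholds to the model with the google-generativeai SDK, which accepts safety settings as plain category/threshold dicts (this mirrors what AI Studio's "Get code" emits, but I haven't run it against your setup; the live calls are commented out since they need an installed SDK and an API key):

```python
# Build the lowest configurable thresholds (BLOCK_ONLY_HIGH) for all four
# harm categories, in the plain-dict form the SDK accepts.
safety_settings = [
    {"category": c, "threshold": "BLOCK_ONLY_HIGH"}
    for c in (
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    )
]

# Live usage (requires `pip install google-generativeai` and an API key):
# import google.generativeai as genai
# genai.configure(api_key="...")
# model = genai.GenerativeModel("gemini-pro", safety_settings=safety_settings)
# print(model.generate_content("what is 1+1").text)
```

Note this only relaxes the output filter; it cannot disable it entirely, which is why BLOCK_NONE may be rejected depending on your account.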
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!
I also face this issue. Can anyone help me handle this?
I have created a chatbot using Gemini and it works well, but when I ask a question like "what is 1+1"
it gives this error:
```
2024-03-18 18:57:23.751 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\app.py", line 144, in <module>
    model_response = get_response(user_input, st.session_state['API_Key'])
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\app.py", line 128, in get_response
    response = st.session_state['conversation'].predict(input=user_input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 378, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 163, in invoke
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 434, in generate
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 424, in generate
    self._generate_with_cache(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 555, in _generate
    response: genai.types.GenerateContentResponse = _chat_with_retry(
                                                    ^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 152, in _chat_with_retry
    return _chat_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "C:\Users\CA\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 150, in _chat_with_retry
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 134, in _chat_with_retry
    return generation_method(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\google\generativeai\generative_models.py", line 434, in send_message
    self._check_response(response=response, stream=stream)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\google\generativeai\generative_models.py", line 461, in _check_response
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
```
Why is it giving this error?
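The safety ratings in the traceback explain the block: the candidate was rated HARASSMENT: MEDIUM, and at the default threshold (BLOCK_MEDIUM_AND_ABOVE, as documented for the Gemini API at the time) anything rated MEDIUM or higher is cut off with finish_reason: SAFETY. Here is my own simplified sketch of that decision logic (an illustration, not the SDK's actual code) showing why the default blocks this response while BLOCK_ONLY_HIGH would let it through:

```python
# Illustrative simplification of how per-category thresholds turn safety
# ratings into a SAFETY block. Not the SDK's real implementation.
SEVERITY = {"NEGLIGIBLE": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3}
BLOCK_AT = {
    "BLOCK_LOW_AND_ABOVE": 1,
    "BLOCK_MEDIUM_AND_ABOVE": 2,
    "BLOCK_ONLY_HIGH": 3,
}

def is_blocked(ratings, thresholds):
    """True if any category's rated probability meets its block threshold."""
    return any(
        SEVERITY[prob] >= BLOCK_AT[thresholds[cat]]
        for cat, prob in ratings.items()
    )

# The exact ratings reported in the traceback above:
ratings = {
    "HARM_CATEGORY_SEXUALLY_EXPLICIT": "NEGLIGIBLE",
    "HARM_CATEGORY_HATE_SPEECH": "LOW",
    "HARM_CATEGORY_HARASSMENT": "MEDIUM",
    "HARM_CATEGORY_DANGEROUS_CONTENT": "NEGLIGIBLE",
}
default = {c: "BLOCK_MEDIUM_AND_ABOVE" for c in ratings}
relaxed = {c: "BLOCK_ONLY_HIGH" for c in ratings}
print(is_blocked(ratings, default))   # True  -> finish_reason: SAFETY
print(is_blocked(ratings, relaxed))   # False -> the "2" comes through
```

So the fix is either to pass relaxed safety_settings through to the model, or to catch StopCandidateException and show the user a "response was filtered" message instead of crashing.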
Has this issue been solved, or what was your workaround?
I gave up on Google a long time ago, but I was just reading on Reddit today that other people still can't get this LLM to do anything - someone said it refused to give them code to delete some files, LOL. I use a bunch of other LLMs instead, including Llama 3 on Groq and Claude.
This is really bizarre. I'm using Gemini-1.5-pro-preview-0409 with Vertex AI to check the Raft research paper PDF, and it throws the error mentioned above.
Like the other guy said, I gave up on Google. Try using another model like Llama 3 on Groq @h777arsh