
Comments (12)

williamito avatar williamito commented on May 20, 2024

Thanks for your report. Are you able to share the specific prompt which is causing this issue?

from generative-ai-python.

mlamothe avatar mlamothe commented on May 20, 2024

I'm not OP, but this is trivial to replicate: just go to Google AI Studio, turn the safety settings down to "low", and use a harmless prompt like this: I said, "let's watch 'Sex and the City'"

You'll trigger the "Sexually Explicit" filter. The bar is so ridiculously low as to make creative writing virtually impossible.


udayzee05 avatar udayzee05 commented on May 20, 2024

I am getting the following safety error for the prompt "what is 1+1":
```
(/home/wolverine/llama2/venv) wolverine@wolverine-GP66-Leopard-11UG:~/llama2/tchat$ python main.py

what is 1+1
Traceback (most recent call last):
  File "/home/wolverine/llama2/tchat/main.py", line 27, in <module>
    result = chain({"content": content})
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
    return self._generate(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 550, in _generate
    response: genai.types.GenerateContentResponse = _chat_with_retry(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 140, in _chat_with_retry
    return _chat_with_retry(**kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/home/wolverine/llama2/venv/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/home/wolverine/llama2/venv/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 138, in _chat_with_retry
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 131, in _chat_with_retry
    return generation_method(**kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/google/generativeai/generative_models.py", line 384, in send_message
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
content {
  parts {
    text: "2"
  }
  role: "model"
}
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
```


rayanfer32 avatar rayanfer32 commented on May 20, 2024

Looks like the minimum safety-filter threshold users are allowed to configure is BLOCK_ONLY_HIGH.

[screenshot of the AI Studio safety settings panel]

You can check this by lowering the safety settings in the right-hand panel to the minimum and clicking the "Get code" option.
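For reference, the settings "Get code" produces can be expressed in the dict format the google-generativeai SDK and REST API accept. This is a sketch; the category names come from the safety_ratings in the tracebacks above, and BLOCK_ONLY_HIGH is, per this comment, the loosest threshold AI Studio exposes.

```python
# Sketch: loosest configurable safety settings, in the dict format
# accepted by the google-generativeai SDK. The four categories match
# the safety_ratings shown in the error output earlier in the thread.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

# These would be passed at model construction (not executed here,
# since it needs the library and an API key):
# model = genai.GenerativeModel("gemini-pro", safety_settings=safety_settings)
```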


github-actions avatar github-actions commented on May 20, 2024

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.


github-actions avatar github-actions commented on May 20, 2024

This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!


aliasghar5124 avatar aliasghar5124 commented on May 20, 2024

I am also facing this issue. Can anyone help me handle it?


ritvikforcebolt avatar ritvikforcebolt commented on May 20, 2024

I have created a chatbot using Gemini, and it works well, but when I ask a question like "what is 1+1" it gives this error:

```
2024-03-18 18:57:23.751 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\app.py", line 144, in <module>
    model_response = get_response(user_input, st.session_state['API_Key'])
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\app.py", line 128, in get_response
    response = st.session_state['conversation'].predict(input=user_input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 378, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 163, in invoke
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 434, in generate
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 424, in generate
    self._generate_with_cache(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 555, in _generate
    response: genai.types.GenerateContentResponse = _chat_with_retry(
                                                    ^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 152, in _chat_with_retry
    return _chat_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "C:\Users\CA\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 150, in _chat_with_retry
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 134, in _chat_with_retry
    return generation_method(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\google\generativeai\generative_models.py", line 434, in send_message
    self._check_response(response=response, stream=stream)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\google\generativeai\generative_models.py", line 461, in _check_response
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
```

Why is it giving this error?
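The commonly suggested workaround for this LangChain setup is to pass explicit safety settings when constructing the chat model, loosening the thresholds that default to blocking MEDIUM and above. A sketch, assuming a version of langchain-google-genai whose `ChatGoogleGenerativeAI` accepts a `safety_settings` argument; string names are shown here for illustration, while the SDK typically uses the `HarmCategory`/`HarmBlockThreshold` enums:

```python
# Sketch: map each harm category to the loosest threshold. String
# names shown for readability; the SDK enums are the usual form.
safety_settings = {
    "HARM_CATEGORY_HARASSMENT": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_HATE_SPEECH": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_DANGEROUS_CONTENT": "BLOCK_ONLY_HIGH",
}

# Constructor call left commented out, since it needs the
# langchain-google-genai package and an API key:
# from langchain_google_genai import ChatGoogleGenerativeAI
# llm = ChatGoogleGenerativeAI(model="gemini-pro", safety_settings=safety_settings)
```

With the traceback above showing HARASSMENT at MEDIUM, raising that category's threshold to BLOCK_ONLY_HIGH is what would let the "what is 1+1" response through.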


GIDDY269 avatar GIDDY269 commented on May 20, 2024

Has this issue been solved, or what was your workaround?


mlamothe avatar mlamothe commented on May 20, 2024

I gave up on Google a long time ago, but I was just reading on Reddit today that other people still can't get this LLM to do anything - someone said it refused to give them code to delete some files, LOL. I use a bunch of other LLMs instead, including Llama 3 on Groq and Claude.


h777arsh avatar h777arsh commented on May 20, 2024

This is really bizarre. I'm using Gemini-1.5-pro-preview-0409 with Vertex AI to check the Raft research paper PDF, and it throws the error mentioned above.


GIDDY269 avatar GIDDY269 commented on May 20, 2024

Like the other guy said, I gave up on Google. Try using another model, like Llama 3 on Groq, @h777arsh.
