Hello,
I deployed the template (and fixed the contentsafety endpoint in the App Service configuration), and I'm facing the following error.
This issue is for a: (mark with an `x`)
- [X] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce
Just initiate a conversation
Any log messages given by the failure
See below
```
2023-10-22T21:32:51.077875015Z: [ERROR] [pid: 1|app: 0|req: 18/18] 169.254.131.1 () {84 vars in 1953 bytes} [Sun Oct 22 21:32:51 2023] GET / => generated 0 bytes in 3 msecs (HTTP/1.1 304) 4 headers in 178 bytes (0 switches on core 0)
2023-10-22T21:32:51.180146360Z: [ERROR] [pid: 1|app: 0|req: 19/19] 169.254.131.1 () {84 vars in 2006 bytes} [Sun Oct 22 21:32:51 2023] GET /assets/index-0aaa15a4.js => generated 0 bytes in 3 msecs (HTTP/1.1 304) 4 headers in 188 bytes (0 switches on core 0)
2023-10-22T21:32:51.200854687Z: [ERROR] [pid: 1|app: 0|req: 20/20] 169.254.131.1 () {82 vars in 1961 bytes} [Sun Oct 22 21:32:51 2023] GET /assets/index-09fe54a5.css => generated 0 bytes in 1 msecs (HTTP/1.1 304) 4 headers in 187 bytes (0 switches on core 0)
2023-10-22T21:32:51.303081733Z: [ERROR] [pid: 1|app: 0|req: 21/21] 169.254.131.1 () {82 vars in 1996 bytes} [Sun Oct 22 21:32:51 2023] GET /assets/Azure-30d5e7c0.svg => generated 0 bytes in 1 msecs (HTTP/1.1 304) 4 headers in 187 bytes (0 switches on core 0)
2023-10-22T21:32:51.523325493Z: [ERROR] [pid: 1|app: 0|req: 22/22] 169.254.131.1 () {82 vars in 1940 bytes} [Sun Oct 22 21:32:51 2023] GET /favicon.ico => generated 0 bytes in 1 msecs (HTTP/1.1 304) 4 headers in 180 bytes (0 switches on core 0)
2023-10-22T21:32:53.600357142Z: [ERROR] [pid: 1|app: 0|req: 23/23] 169.254.131.1 () {82 vars in 1991 bytes} [Sun Oct 22 21:32:53 2023] GET /assets/Send-d0601aaa.svg => generated 0 bytes in 6 msecs (HTTP/1.1 304) 4 headers in 185 bytes (0 switches on core 0)
2023-10-22T21:32:54.052951162Z: [ERROR] Returning default config
2023-10-22T21:32:54.150731345Z: [ERROR] Returning default config
2023-10-22T21:32:54.152476530Z: [ERROR] New message id: 8e3dce5a-f483-4b34-99f6-6e737baab98c with tokens {'prompt': 0, 'completion': 0, 'total': 0}
2023-10-22T21:32:54.486134144Z: [ERROR] ERROR:root:Exception in /api/conversation/custom
2023-10-22T21:32:54.486178444Z: [ERROR] Traceback (most recent call last):
2023-10-22T21:32:54.486186044Z: [ERROR]   File "app.py", line 271, in conversation_custom
2023-10-22T21:32:54.486190444Z: [ERROR]     messages = message_orchestrator.handle_message(user_message=user_message, chat_history=chat_history, conversation_id=conversation_id, orchestrator=ConfigHelper.get_active_config_or_default().orchestrator)
2023-10-22T21:32:54.486224543Z: [ERROR]   File "/usr/src/app/./backend/utilities/helpers/OrchestratorHelper.py", line 13, in handle_message
2023-10-22T21:32:54.486229443Z: [ERROR]     return orchestrator.handle_message(user_message, chat_history, conversation_id)
2023-10-22T21:32:54.486233243Z: [ERROR]   File "/usr/src/app/./backend/utilities/orchestrator/OrchestratorBase.py", line 33, in handle_message
2023-10-22T21:32:54.486237143Z: [ERROR]     result = self.orchestrate(user_message, chat_history, **kwargs)
2023-10-22T21:32:54.486240743Z: [ERROR]   File "/usr/src/app/./backend/utilities/orchestrator/OpenAIFunctions.py", line 78, in orchestrate
2023-10-22T21:32:54.486244743Z: [ERROR]     result = llm_helper.get_chat_completion_with_functions(messages, self.functions, function_call="auto")
2023-10-22T21:32:54.486248543Z: [ERROR]   File "/usr/src/app/./backend/utilities/helpers/LLMHelper.py", line 34, in get_chat_completion_with_functions
2023-10-22T21:32:54.486252443Z: [ERROR]     return openai.ChatCompletion.create(
2023-10-22T21:32:54.486256043Z: [ERROR]   File "/usr/local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
2023-10-22T21:32:54.486259943Z: [ERROR]     return super().create(*args, **kwargs)
2023-10-22T21:32:54.486263543Z: [ERROR]   File "/usr/local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
2023-10-22T21:32:54.486267443Z: [ERROR]     response, _, api_key = requestor.request(
2023-10-22T21:32:54.486271043Z: [ERROR]   File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 298, in request
2023-10-22T21:32:54.486274843Z: [ERROR]     resp, got_stream = self._interpret_response(result, stream)
2023-10-22T21:32:54.486278743Z: [ERROR]   File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 700, in _interpret_response
2023-10-22T21:32:54.486282443Z: [ERROR]     self._interpret_response_line(
2023-10-22T21:32:54.486286643Z: [ERROR]   File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
2023-10-22T21:32:54.486290843Z: [ERROR]     raise self.handle_error_response(
2023-10-22T21:32:54.486308143Z: [ERROR] openai.error.InvalidRequestError: Unrecognized request argument supplied: functions
2023-10-22T21:32:54.494254276Z: [ERROR] [pid: 1|app: 0|req: 24/24] 169.254.131.1 () {84 vars in 1946 bytes} [Sun Oct 22 21:32:53 2023] POST /api/conversation/custom => generated 62 bytes in 511 msecs (HTTP/1.1 500) 2 headers in 90 bytes (1 switches on core 0)
```
Any idea how to fix that?
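In case it is relevant: Azure OpenAI only started accepting the `functions` argument with api-version `2023-07-01-preview` (and a model version that supports function calling, e.g. gpt-35-turbo 0613), so the error may simply mean my configured `AZURE_OPENAI_API_VERSION` is older. A quick sanity check I wrote for myself (the `supports_functions` helper is hypothetical, assuming the usual date-style api-version strings):

```python
from datetime import date

# Hypothetical helper: Azure OpenAI api-version strings look like
# "2023-05-15" or "2023-07-01-preview"; function calling was added
# in 2023-07-01-preview, so compare the date part against that.
def supports_functions(api_version: str) -> bool:
    base = api_version.split("-preview")[0]
    return date.fromisoformat(base) >= date(2023, 7, 1)

print(supports_functions("2023-05-15"))          # False
print(supports_functions("2023-07-01-preview"))  # True
```

If the App Service setting predates `2023-07-01-preview`, bumping it there and restarting the app would be my first thing to try.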