Comments (16)

krrishdholakia commented on July 23, 2024

Closing as unable to repro, but bump us to reopen if this turns out not to be a version issue.

i'll try and add the langfuse version id logging today @Manouchehri

from litellm.

krrishdholakia commented on July 23, 2024

i don't see the user field in this call

what would you expect to happen here? @Manouchehri

from litellm.

Manouchehri commented on July 23, 2024

It should be pulled from the key alias. I think it's more than just user that is racy. See the screenshot below: these are identical requests, but the name sometimes changes at random.

[screenshot]

krrishdholakia commented on July 23, 2024

[screenshot: 2024-06-01 at 10:08 AM]

I can see the line of code in the server; the only way this would happen is if:

  • user_id is being passed by the request (seems unlikely)
  • user_id isn't being passed by user_api_key_auth (possible)

krrishdholakia commented on July 23, 2024

do you have a consistent repro of this? @Manouchehri

Manouchehri commented on July 23, 2024

do you have a consistent repro of this? @Manouchehri

Nope. I'm super confused why the user_api_key_* fields are sometimes missing; I don't see any pattern to it. Here are two identical requests I sent:

Good request shows this:

[screenshot]

Bad request shows this:

[screenshot]

ishaan-jaff commented on July 23, 2024

Are you on Cloud Run with 2 instances? It looks like one instance has an older version.

Manouchehri commented on July 23, 2024

I'm 98% sure all instances are up to date. Can't confirm retroactively though, because of #3673. 😅

ishaan-jaff commented on July 23, 2024

Can you restart / re-deploy on Cloud Run and see if the issue persists?

I'm unable to repro your problem locally

Manouchehri commented on July 23, 2024

Sorry, I confirmed that I am indeed only running 1 instance. I'm able to reproduce it with this loop:

for i in {1..50} ; do curl -v "${OPENAI_API_BASE}/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo-0125",
    "max_tokens": 10,
    "seed": 31337,
    "messages": [
      {
        "role": "user",
        "content": "what is 1 plus 1?"
      }
    ],
    "cache": {
      "no-cache": true
    },
    "extra_headers": {
      "cf-skip-cache": "True"
    }
  }' ; done

The first request works, but the following ones don't.

[screenshot]

Manouchehri commented on July 23, 2024

I am using 1.39.6 100% everywhere; the Cloud Run instances don't stick around idle for more than 15 minutes, worst case. So that should never be an issue for me. =)

krrishdholakia commented on July 23, 2024

oh! that's interesting - thanks for this @Manouchehri

Manouchehri commented on July 23, 2024

Seems like an API key caching issue of some sort?

for i in {1..10} ; do curl -v "${OPENAI_API_BASE}/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gemini-1.5-flash-001",
    "max_tokens": 10,
    "messages": [
      {
        "role": "user",
        "content": "what is 1 plus 1?"
      }
    ],
    "cache": {
      "no-cache": true
    }
  }' ; done

This is not because of the OIDC caching I've added in the recent PRs. In the test case above, you can see I'm using gemini-1.5-flash-001 on Vertex AI, which I haven't worked on at all for that feature. :)

[screenshot]

Manouchehri commented on July 23, 2024

Are you caching anything for about a minute? It kinda looks like that's related to this issue. If I sleep for 61 seconds between requests, it works perfectly.

Repro that doesn't trigger the bug:

for i in {1..10} ; do sleep 61 && curl -v "${OPENAI_API_BASE}/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gemini-1.5-flash-001",
    "max_tokens": 10,
    "messages": [
      {
        "role": "user",
        "content": "what is 1 plus 1?"
      }
    ],
    "cache": {
      "no-cache": true
    }
  }' ; done

[screenshot]

Repro that does trigger the bug:

for i in {1..10} ; do sleep 55 && curl -v "${OPENAI_API_BASE}/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gemini-1.5-flash-001",
    "max_tokens": 10,
    "messages": [
      {
        "role": "user",
        "content": "what is 1 plus 1?"
      }
    ],
    "cache": {
      "no-cache": true
    }
  }' ; done

See below how everything beyond the first request is broken?

[screenshot]
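The timing in the two repros above is consistent with a fixed 60-second in-memory TTL cache: the first request misses and is handled correctly, anything within the next 60 seconds hits a stale entry, and a request after 61 seconds misses again. A minimal sketch of that behavior (the TTLCache class below is a hypothetical illustration, not LiteLLM's actual implementation):

```python
# Sketch of a fixed-TTL in-memory cache, to show why a 55s gap between
# requests reproduces the bug while a 61s gap avoids it.

class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry_timestamp)

    def set(self, key, value, now):
        self.store[key] = (value, now + self.ttl)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is None or now >= entry[1]:
            return None  # miss, or entry expired
        return entry[0]

cache = TTLCache(ttl_seconds=60)

# Request at t=0: cache miss, so the full auth lookup runs correctly;
# afterwards a *partial* object is written under the same key
# (the suspected bug).
assert cache.get("hashed-token", now=0) is None
cache.set("hashed-token", {"token": "hashed-token"}, now=0)

# Request at t=55 (the failing repro): hits the stale partial entry.
assert cache.get("hashed-token", now=55) == {"token": "hashed-token"}

# Request at t=61 (the working repro): entry expired, miss again.
assert cache.get("hashed-token", now=61) is None
```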

matthiaslau commented on July 23, 2024

I am experiencing the same issue and it seems this is really a caching bug.

When the hashed API token does not exist as a key in the cache, an empty LiteLLM_VerificationTokenView object is created with only the token set (existing_spend_obj = LiteLLM_VerificationTokenView(token=token)), and this object is then written to the cache (user_api_key_cache.set_cache(key=hashed_token, value=existing_spend_obj)).

Now it seems this cache entry, keyed by the API token, is used somewhere else when loading the API key object; I'm not 100% sure yet where exactly this happens. The object coming back from the cache now only contains the token and api_key, missing all other data:

token='d514bb211d49d8c135f13c8bd9d4022b37e3a4a18c659b09892b1c62959e1088' key_name=None key_alias=None spend=0.00114 max_budget=None expires=None models=[] aliases={} config={} user_id=None team_id=None max_parallel_requests=None metadata={} tpm_limit=None rpm_limit=None budget_duration=None budget_reset_at=None allowed_cache_controls=[] permissions={} model_spend={} model_max_budget={} soft_budget_cooldown=False litellm_budget_table=None org_id=None team_spend=None team_alias=None team_tpm_limit=None team_rpm_limit=None team_max_budget=None team_models=[] team_blocked=False soft_budget=None team_model_aliases=None team_member_spend=None end_user_id=None end_user_tpm_limit=None end_user_rpm_limit=None end_user_max_budget=None api_key='8bb0465eb6a148ba9d969368fe55a2c23bdb63d937c61c5fa4749d95490eded3' user_role=<LitellmUserRoles.INTERNAL_USER: 'internal_user'> allowed_model_region=None

After 60s (the default in_memory_cache_ttl) this cache entry expires, and things work again for a single request.
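The failure mode described here can be sketched as two code paths sharing one cache key space: an auth path that caches a full key object under the hashed token, and a spend-tracking path that overwrites the same key with a minimal object. All names below are illustrative stand-ins, not LiteLLM's actual classes:

```python
# Sketch of the suspected key collision: spend tracking and auth
# share one cache keyed by the hashed token, so a minimal spend
# object can shadow the full auth object.
from dataclasses import dataclass

@dataclass
class FullAuth:          # stands in for the full key/auth object
    token: str
    user_id: str
    key_alias: str

@dataclass
class SpendOnly:         # stands in for the token-only spend object
    token: str
    spend: float = 0.0

cache = {}               # stands in for the shared key cache

def auth_request(hashed_token):
    obj = cache.get(hashed_token)
    if obj is None:
        obj = FullAuth(token=hashed_token, user_id="u1", key_alias="prod-key")
        cache[hashed_token] = obj
    return obj

def update_spend(hashed_token):
    # Bug: unconditionally overwrites whatever was cached with a
    # minimal object containing only the token.
    cache[hashed_token] = SpendOnly(token=hashed_token)

first = auth_request("abc123")
assert isinstance(first, FullAuth)    # first request: correct object

update_spend("abc123")                # spend tracking clobbers the entry

second = auth_request("abc123")
assert isinstance(second, SpendOnly)  # later requests see the bad object
assert getattr(second, "user_id", None) is None
```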

I will continue debugging; perhaps @krrishdholakia has an idea whether this is the right direction, where exactly the cached entry is consumed as the API key object, and how to fix this?

matthiaslau commented on July 23, 2024

I think the cache retrieval happens here:

valid_token: Optional[UserAPIKeyAuth] = user_api_key_cache.get_cache( # type: ignore

The hashed token is used as the cache key here as well, and a full API key object is expected, but the _update_key_cache function has written the incomplete object to the cache.
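One possible direction for a fix (a sketch under my own assumptions, not the project's actual patch) is to namespace the spend-tracking entry so it can never shadow the auth object stored under the raw hashed token:

```python
# Sketch: write spend-tracking objects under a prefixed key so the
# auth lookup, which reads the raw hashed token, is never clobbered.
# All names here are hypothetical stand-ins.
cache = {}

def update_spend_fixed(cache, hashed_token, spend_obj):
    # Distinct key space for spend entries.
    cache[f"spend:{hashed_token}"] = spend_obj

def get_auth(cache, hashed_token):
    # Auth lookups read the raw hashed token and are unaffected.
    return cache.get(hashed_token)

cache["abc123"] = {"user_id": "u1"}   # full auth object already cached
update_spend_fixed(cache, "abc123", {"spend": 0.001})

assert get_auth(cache, "abc123") == {"user_id": "u1"}   # not clobbered
assert cache["spend:abc123"] == {"spend": 0.001}
```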