
tanuki.py's People

Contributors

bmagz avatar cosmastech avatar dnlkwak avatar jackhopkins avatar jonesmabea avatar martbakler avatar michaelsel avatar


tanuki.py's Issues

Create a persistence layer abstraction

In many cases, writing to disk is not possible or advisable. There should be the option of other persistence layers that enable writing data to S3, Redis, Cloudflare KV, etc. To make this possible, a clean persistence interface must be created.
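As a sketch of what such an interface could look like (class and method names are hypothetical, not the library's actual API):

```python
# Hypothetical sketch of a persistence-layer interface that disk, S3,
# Redis, or Cloudflare KV backends could all implement.
from abc import ABC, abstractmethod
from typing import Optional


class PersistenceLayer(ABC):
    """Abstract store for patch/align data."""

    @abstractmethod
    def save(self, key: str, data: bytes) -> None:
        ...

    @abstractmethod
    def load(self, key: str) -> Optional[bytes]:
        ...


class InMemoryPersistence(PersistenceLayer):
    """Simplest backend, usable where writing to disk is not possible."""

    def __init__(self) -> None:
        self._store: dict = {}

    def save(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def load(self, key: str) -> Optional[bytes]:
        return self._store.get(key)
```

An S3 or Redis backend would then only need to implement the same two methods.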

[Use-case] email cleaner for early access waitlist

  • Context: Company has a "get early access" form where prospective customers can submit their email
  • Problem: They get hundreds of thousands of requests and many of them are fake emails so cleaning them to build an outbound email campaign is time-consuming
  • Workflow:
    • Emails passed
    • Extract
      • Email
      • Company
      • Real: (Y/N)

[Use-case] Replacing regex for email-to-name matching


  • Context: Company has an email list of thousands of emails and list of names in their database that aren't mapped
  • Problem: A matching algorithm can be written, but there are dozens of edge cases and covering all of them exhaustively would take too long
  • Workflow:

OpenAI FinetuneJob started with wrong type

Describe the bug
When getting OpenAI finetunes, the FinetuneJob is instantiated with a string as fine_tuned_model, not a BaseModelConfig.

To Reproduce
Steps to reproduce the behavior:
Start an OpenAI finetuning job and inspect the resulting FinetuneJob

Expected behavior
FinetuneJob should have a BaseModelConfig as the fine_tuned_model


[Use-case] Twitter and Reddit Scraper for Buying Intent

  • Context: There's lots of social feedback on various channels on Twitter, Reddit, LNKD about dbt and their competitors. Lots of them are demonstrating buying intent or frustrations about a competitor.
  • Problem: Surfacing these insights via keywords is low quality and time-consuming. There are existing scrapers and products, but they're expensive and take days to set up properly.
  • Workflow:

Add a top-level config parameter to disable telemetry.

Currently, we automatically track usage of MP to help understand how people are using it.

There is currently no way to turn this off. Users may want to disable telemetry for privacy reasons.

We should add a top-level global config parameter for users to opt-out if necessary. The syntax could look something like this:

Monkey.telemetry(disable=True)

OpenAI API failed to generate response

I am using the exact code provided in an example, and I have an existing OpenAI API key that I have tested in another script calling OpenAI directly.

Please help me understand why it's providing a generic exception response "Exception: OpenAI API failed to generate a response"

@monkey.patch
def score_sentiment(input: str) -> Annotated[int, Field(gt=0, lt=10)]:
    """Scores the input between 0-10"""

@monkey.align
def align_score_sentiment():
    """Register several examples to align your function"""
    assert score_sentiment("I love you") == 10
    assert score_sentiment("I hate you") == 0
    assert score_sentiment("You're okay I guess") == 5

def test_score_sentiment():
    """We can test the function as normal using Pytest or Unittest"""
    score = score_sentiment("I like you")
    assert score >= 7

if __name__ == "__main__":
    align_score_sentiment()
    print(score_sentiment("I like you")) # 7
    print(score_sentiment("Apples might be red")) #

Traceback (most recent call last):
  File "/Users/jacoblaws/Development/python/eleva/gpt_experiments/monkey_patch_descriptions.py", line 46, in <module>
    print(score_sentiment("I like you")) # 7
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/monkey.py", line 206, in wrapper
    output = Monkey.language_modeler.generate(args, kwargs, Monkey.function_modeler, function_description)
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/language_models/language_modeler.py", line 31, in generate
    choice = self.synthesise_answer(prompt, model, model_type, llm_parameters)
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/language_models/language_modeler.py", line 41, in synthesise_answer
    return self.api_models[model_type].generate(model, self.system_message, prompt, **llm_parameters)
  File "/Users/jacoblaws/opt/anaconda3/lib/python3.9/site-packages/monkey_patch/language_models/openai_api.py", line 70, in generate
    raise Exception("OpenAI API failed to generate a response")
Exception: OpenAI API failed to generate a response

Some tests not passing

The following tests aren't passing

test_assertion_visitor/test_mock -- test_rl_equality
test_validator/test_instantiate -- test_generics, test_extended_generics

Startup bug when running todolist example

During application startup for the todolist example (examples/todolist/backend/main.py) the following error occurs

Error
monkey-patch\src\assertion_visitor.py", line 254, in extract_output
    raise NotImplementedError(f"Function {func_name} not handled yet")
NotImplementedError: Function TodoItem not handled yet

[Use-case] Auto-tagger for new products at marketplace company

  • Context: Marketplace company lists hundreds of new products from third-parties on its marketplace and thousands are revised and updated each day
  • Problem: The onboarding and product team would spend hours reviewing product specifications and tagging them with category markers so they can 1) correctly be classified and 2) be easily searchable
  • Workflow
    • Retrieve product description and specs
    • Group category: Ski, Snowboarding, Helmet, Jacket, Pants, Socks, Shoes, Gloves, Tools, Other
    • Brand: Salomon, Burton, Helly Hansen, Atomic, Rossignol, Volkol, Blizzard, Elan
    • Features: Waterproof, Insulated, Moisture Wicking, Quick Drying, Powder, Sun-protective, Removable Liner,

[Use-case] Customer review analyzer for athletic apparel company

  • Context: Athletic apparel company sells on website, Amazon, and other merchants where customers leave reviews with star ratings
  • Problem: With the volume of reviews, there's lots of noise and it's hard to understand what the customers liked and didn't like
  • Workflow
    • Import review
    • Sentiment analysis: (-1 to 1)
    • Primary category (quality, durability, color, size, delivery, support, cost, none)
    • 2-sentence summary of review: (<2 sentences)

Support for constraints in Pydantic Field types

Pydantic fields can include constraints on the content, e.g. a character limit:

from pydantic import BaseModel, Field 
from typing import Optional

class MyPydanticModel(BaseModel):
    title: Optional[str] = Field(None, max_length=10)

We should support this, and all other Field constraints.
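To illustrate, a plain-Python check along these lines (an assumption about what the validator would need to enforce, not the actual implementation) could gate generated outputs:

```python
# Illustrative only: enforce a few common Field constraints on a
# generated value before accepting it.
from typing import Any, Optional


def satisfies_constraints(value: Any,
                          max_length: Optional[int] = None,
                          gt: Optional[float] = None,
                          lt: Optional[float] = None) -> bool:
    """Return True if the generated value respects the declared constraints."""
    if max_length is not None and len(value) > max_length:
        return False
    if gt is not None and not value > gt:
        return False
    if lt is not None and not value < lt:
        return False
    return True
```

In practice the constraints would be read off the Field metadata rather than passed by hand.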

Expired Discord group link

Hi,

I'd like to join the Discord group but the link invite is listed as invalid.
Any way a fresh link can be provided?

Cache function invocations

Is your feature request related to a problem? Please describe.
Running any function is expensive due to the network hop required.

Describe the solution you'd like
Automatic caching to memory, or to a designated provider (e.g. Redis)

Describe alternatives you've considered
Asking the user to use an existing function caching library such as @functools.cache
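A minimal sketch of what built-in memory caching could look like, assuming calls are keyed on their serialized arguments (the decorator name is hypothetical):

```python
# Hypothetical in-memory cache for patched-function invocations,
# keyed on a hash of the serialized arguments.
import functools
import hashlib
import json


def cached_invocation(func):
    cache: dict = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Serialize args deterministically so identical calls share a key.
        key = hashlib.sha256(
            json.dumps([args, kwargs], sort_keys=True, default=str).encode()
        ).hexdigest()
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper
```

Unlike @functools.cache, hashing the serialized call would also work for unhashable inputs like lists and dicts, and the dict could be swapped for a Redis client.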

When using Bedrock models, the finetuning disabling is not working as expected

Describe the bug
When using a Bedrock teacher model, finetuning should be disabled for the time being. However, the check that disables it is not applied correctly.

To Reproduce
When using a Bedrock model, after 200 datapoints, tanuki will try to start a finetune job, which results in an error

Expected behavior
When using Bedrock models, finetuning should not be used.
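A minimal sketch of the intended guard, assuming the provider name is available on the teacher config (the function and parameter names are hypothetical):

```python
# Hypothetical guard: only trigger finetuning for providers that
# currently support it, regardless of how many datapoints exist.
def should_finetune(teacher_provider: str, datapoints: int,
                    threshold: int = 200) -> bool:
    if teacher_provider == "bedrock":
        return False  # Bedrock finetuning not supported for now
    return datapoints >= threshold
```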

Support for streaming responses

LLM providers, specifically OpenAI, support streamed responses. We should support this.

Requirements:

  • Iterator typed outputs should be streamable by default.
  • Support 1000s of streamed outputs through last-n context management.
  • TDA should use 'in' syntax with iterator-as-list objects to specify the first N examples that should be streamed
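The core mechanics can be sketched independently of any provider: yield complete items as chunks arrive instead of buffering the whole response. The chunk source below is a stand-in for a streamed LLM response:

```python
# Sketch: turn a stream of raw text chunks into an iterator of
# complete items, emitting each item as soon as its delimiter arrives.
from typing import Iterable, Iterator


def stream_items(chunks: Iterable[str], delimiter: str = "\n") -> Iterator[str]:
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while delimiter in buffer:
            item, buffer = buffer.split(delimiter, 1)
            yield item
    if buffer:
        yield buffer  # flush the final, undelimited item
```

Because items are yielded incrementally, only the last-n items need to be retained for context management, which is what makes 1000s of streamed outputs feasible.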

Improved telemetry

Is your feature request related to a problem? Please describe.
Too many anonymous telemetry calls are being sent. This creates unnecessary overhead.

Describe the solution you'd like
Reduce the number of calls made, and make each one more informative while still being anonymized.
E.g:

  • Post an 8-byte hash of the API key
  • Post only the function signatures, not the actual inputs.
  • Only post when flushing the buffer to disk.
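The first point could be implemented with a standard digest truncated to 8 bytes (16 hex characters); this is a sketch, not the shipped telemetry code:

```python
# Sketch: anonymized 8-byte identifier derived from the API key,
# so the raw key never leaves the user's machine.
import hashlib


def anonymous_id(api_key: str) -> str:
    return hashlib.sha256(api_key.encode()).hexdigest()[:16]
```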

Embedding support for RAG usecases.

Patched Monkey functions should support operating in 'encoder-only' mode, in which a function returns a vector that can be used for indexing and retrieving from a VectorDB.

In this issue, I propose 2 syntactic options for supporting vector responses.

Embedding

Statically Decorated Functions

A new decorator, embed, informs the library that we want to return a vector rather than a decoded type. By passing an array-like type into the decorator, we can specify the type we want to construct, i.e. np.array or list[float].

Example:

@monkey.embed(as_type=np.array)  # 'as' is a reserved keyword, so the parameter needs another name
def score_sentiment(input: str) -> Annotated[int, Field(gt=0, lt=10)]:
    """
    Scores the input between 0-10
    """

This returns an Embedding[np.array] when called.

Pro:

  • Can use the existing type hint to inform the embedding process

Con:

  • The type hint no longer informs the user what they actually get, i.e. it impacts readability.

Custom Response Object Type

Example:

@monkey.patch
def score_sentiment(input: str) -> Embedding[np.array[float]]:
    """
    Scores the input between 0-10
    """

This returns an Embedding[np.array] when called.

Pro:

  • Readable
  • Not creating another decorator

Con:

  • Cannot use the type hint when computing the embedding, possibly reducing performance.

Aligning

In order for us to be able to align the embeddings to our use-case, we need syntax that supports assert as well as the Embedding object.

@monkey.align
def align_score_sentiment():
    """Register several examples to align the embedding function"""

    assert score_sentiment("I love you") == score_sentiment("I really love you")
    assert score_sentiment("I hate you") != score_sentiment("I love you")

We can possibly consider custom methods on the Embedding object, i.e Embedding.similar, but this may not be necessary.
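As a sketch of how == and != asserts could translate into similarity constraints (the threshold and helper names are assumptions, not proposed API):

```python
# Sketch: an `==` assert between embeddings reads as a positive pair
# (high cosine similarity), `!=` as a negative pair. These pairs could
# later feed a contrastive objective.
import math
from typing import Sequence


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def aligned(a: Sequence[float], b: Sequence[float],
            threshold: float = 0.9) -> bool:
    return cosine_similarity(a, b) >= threshold
```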

Finetuning

In order to fine-tune our aligned embedding functions on the backend, we can use contrastive finetuning - but this is not possible using the OpenAI API.

[Use-case] PDF classifier for freight brokerage company


  • Context: Trucking company receiving various docs in email
  • Goal: Depending on what type of document it is, trigger a workflow
  • Workflow
    • Import PDF
    • Convert to markdown
    • Invoice => send to Quickbooks
    • Shipping order => send to transportation management system
    • Customs documents => send to customs processor team
    • Permits => send to customs processor team
    • RFP / RFQ => send to quoting team
    • Sales contract => send to Hubspot
    • (if none of the above) => send to email support team
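The dispatch step at the end of this workflow is a plain routing table once classification is done; a sketch with stand-in destination names (the classifier itself would be the patched LLM function):

```python
# Sketch: route a classified document type to its destination system.
# Destination names are illustrative stand-ins.
ROUTES = {
    "invoice": "quickbooks",
    "shipping_order": "transportation_management_system",
    "customs_document": "customs_processor_team",
    "permit": "customs_processor_team",
    "rfp_rfq": "quoting_team",
    "sales_contract": "hubspot",
}


def route(doc_type: str) -> str:
    # Anything unclassified falls through to the email support team.
    return ROUTES.get(doc_type, "email_support_team")
```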

Support for Async

We should support asynchronous execution of patched functions, as follows:

@monkey.patch
async def iterate_presidents() -> Iterable[str]:
    """"List the presidents of the United States""""


@monkey.patch
async def tell_me_more_about(topic: str) -> str:
   """Describe this in more detail"


# For each president listed, generate a description concurrently
start_time = time()
tasks = []
async for president in await iterate_presidents():
    # Use asyncio.create_task to schedule the coroutine for execution before awaiting it
    # This way descriptions will start being generated while the list of presidents is still being generated
    task = asyncio.create_task(tell_me_more_about(president))
    tasks.append(task)

descriptions = await asyncio.gather(*tasks)

Note: This was inspired by a Magentic example.

Align statements fail when pydantic object has a negative int or float property

The following align fails when the float is negative; if the float is positive (0.5), it does not fail. The same happens with ints: negative ints raise an error, positive ints do not.

class ArticleSummary(BaseModel):
    sentiment: float = Field(..., ge=-10, le=1.0)

@tanuki.align
def align_analyze_article():

    html_content = "<head></head><body><p>Nvidia has made the terrible decision to buy ARM for $40b on 8th November. This promises to "\
                   "be an extremely important decision for the industry, even though it creates a monopoly.</p></body> "
    assert analyze_article_3(html_content, "nvidia") == ArticleSummary(
        sentiment=-0.5,
    )
align_analyze_article()
File "site-packages\tanuki\assertion_visitor.py", line 424, in extract_output
    raise NotImplementedError(f"Node type {type(node).__name__} not handled yet")
NotImplementedError: Node type UnaryOp not handled yet

Add support for local open-source models

Is your feature request related to a problem? Please describe.
We want to generalise the package by supporting local OSS models as students or teachers

Describe the solution you'd like
We need to support OSS models (Zephyr, Mistral, Llama, Qwen, etc.) as teacher or student. Currently we rely only on OpenAI, whereas the library should be generalised to local OSS models

[Use-case] Discord bot for flagging promotional content

Input: Stream of Discord messages with the time + user ID

Output: Alert if promotional content, entity / product being promoted, user ID

Purpose: To flag to admins and moderators if content is very promotional or about selling stuff vs. genuine dialogue.

Support for pydantic classes in align statements

This currently errors out during parsing of aligns

class Location(BaseModel):
    """A representation of a US city and state"""

    city: str = Field(description="The city's proper name")
    state: str = Field(description="The state's two-letter abbreviation (e.g. NY)")


@Monkey.patch
def construct_object_1(input: str) -> Location:
    """
    Using the input, construct the output model in the type hint
    """

@Monkey.align
def align():
	assert construct_object_1("The Windy City") == Location(city="Chicago", state="IL")

The reason is that it is saved into the align dataset as:
"{'args': ('The Windy City',), 'kwargs': {}, 'output': Location(city='Chicago', state='IL')}\r"

Track exceptions that are not caused by external dependencies

Currently, we don't know where our users are having issues. We should intercept exceptions that are not caused by third parties (i.e. OpenAI) and submit them to the endpoint along with a truncated stack trace (not including the names of any files on the user's machine).

This will help us debug any user issues.
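A sketch of the path-stripping step using only the standard library (the endpoint submission itself is out of scope here):

```python
# Sketch: extract a stack trace from an exception, keeping only base
# filenames so full paths on the user's machine are never transmitted.
import traceback
from typing import List, Tuple


def truncated_trace(exc: BaseException) -> List[Tuple[str, int, str]]:
    frames = traceback.extract_tb(exc.__traceback__)
    return [(f.filename.rsplit("/", 1)[-1], f.lineno, f.name) for f in frames]
```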

Support for enums

Currently validator.check_type does not support enum output types. If the LLM outputs a valid integer, check_type returns False where it should return True

class RequestIntent(Enum):
    REFUNDS = 1
    PRODUCT_QUESTIONS = 2
    ACCOUNT_OPENING = 3
    ORDER_STATUS = 4
    ORDER_MODIFICATIONS = 5


@Monkey.patch
def classify_request(input: str) -> RequestIntent: # Multiclass classification
    """
    Classify the incoming user request intent 
    """

Align statements do not support lists or tuples

The following align statements error out when the inputs are lists or tuples

@Monkey.patch
def classify_sentiment(input: List[str]) -> Literal['Good', 'Bad', 'Neutral']: # Multi-class classification
    """
    Determine if the input is positive, negative or neutral sentiment
    """

@Monkey.align
def align():

    assert classify_sentiment(["I thought the ending was awesome"]) == 'Good'
    assert classify_sentiment(["The acting was horrendous"]) == 'Bad'
    assert classify_sentiment(["It was a dark and stormy night"]) == 'Neutral'

@Monkey.patch
def classify_sentiment(input: tuple) -> Literal['Good', 'Bad', 'Neutral']: # Multi-class classification
    """
    Determine if the input is positive, negative or neutral sentiment
    """

@Monkey.align
def align():

    assert classify_sentiment(("I thought the ending was awesome", "It was really good")) == 'Good'

[Use-case] Customer support bot for Twitter

  • Context: Company has a Twitter account and users tag with compliments, complaints, questions, and more
  • Problem: Responding to each of them manually and triaging them with customer support is incredibly time-consuming and costly to have 24/7 coverage. Building a chatbot from scratch is also time-consuming
  • Workflow
    • Tweet passed
    • Step 1
      • Respond to the issue with something that conveys sympathy and understanding and that they'll be in touch.
    • Step 2 (if question / complaint) - might be another function
      • Create a customer support ticket
      • Name
      • Issue
      • Urgency

Resolve Jupyter notebook system error with typed returns


OSError Traceback (most recent call last)
Cell In[67], line 1
----> 1 x = create_todolist_item("I need to go and visit Jeff at 3pm tomorrow")
2 print(x)

File ~/Paperplane/repos/monkey-patch.py/examples/wikipedia/../../src/monkey.py:202, in Monkey.patch.<locals>.wrapper(*args, **kwargs)
200 @wraps(test_func)
201 def wrapper(*args, **kwargs):
--> 202 function_description = Register.load_function_description(test_func)
203 f = str(function_description.__dict__.__repr__() + "\n")
204 output = Monkey.language_modeler.generate(args, kwargs, Monkey.function_modeler, function_description)

File ~/Paperplane/repos/monkey-patch.py/examples/wikipedia/../../src/register.py:86, in Register.load_function_description(func_object)
80 # Extract class definitions for input and output types
81 input_class_definitions = {
82 param_name: get_class_definition(param_type)
83 for param_name, param_type in input_type_hints.items()
84 }
---> 86 output_class_definition = get_class_definition(output_type_hint)
88 return FunctionDescription(
89 name=func_object.__name__,
90 docstring=docstring,
(...)
94 output_class_definition=output_class_definition
95 )

File ~/Paperplane/repos/monkey-patch.py/examples/wikipedia/../../src/register.py:77, in Register.load_function_description.<locals>.get_class_definition(class_type)
75 return [get_class_definition(arg) for arg in class_type.__args__ if arg is not None]
76 elif inspect.isclass(class_type) and class_type.__module__ != "builtins":
---> 77 return inspect.getsource(class_type)
78 return class_type.__name__

File /usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:1262, in getsource(object)
1256 def getsource(object):
1257 """Return the text of the source code for an object.
1258
1259 The argument may be a module, class, method, function, traceback, frame,
1260 or code object. The source code is returned as a single string. An
1261 OSError is raised if the source code cannot be retrieved."""
-> 1262 lines, lnum = getsourcelines(object)
1263 return ''.join(lines)

File /usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:1244, in getsourcelines(object)
1236 """Return a list of source lines and starting line number for an object.
1237
1238 The argument may be a module, class, method, function, traceback, frame,
(...)
1241 original source file the first line of code was found. An OSError is
1242 raised if the source code cannot be retrieved."""
1243 object = unwrap(object)
-> 1244 lines, lnum = findsource(object)
1246 if istraceback(object):
1247 object = object.tb_frame

File /usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:1063, in findsource(object)
1055 def findsource(object):
1056 """Return the entire source file and starting line number for an object.
1057
1058 The argument may be a module, class, method, function, traceback, frame,
1059 or code object. The source code is returned as a list of all the lines
1060 in the file and the line number indexes a line in that list. An OSError
1061 is raised if the source code cannot be retrieved."""
-> 1063 file = getsourcefile(object)
1064 if file:
1065 # Invalidate cache if needed.
1066 linecache.checkcache(file)

File /usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:940, in getsourcefile(object)
936 def getsourcefile(object):
937 """Return the filename that can be used to locate an object's source.
938 Return None if no way can be identified to get the source.
939 """
--> 940 filename = getfile(object)
941 all_bytecode_suffixes = importlib.machinery.DEBUG_BYTECODE_SUFFIXES[:]
942 all_bytecode_suffixes += importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES[:]

File /usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:908, in getfile(object)
906 return module.__file__
907 if object.__module__ == '__main__':
--> 908 raise OSError('source code not available')
909 raise TypeError('{!r} is a built-in class'.format(object))
910 if ismethod(object):

OSError: source code not available

Add support for Embeddings from AWS Bedrock

We are planning to support returning embeddings from OpenAI. We should generalize this to support embeddings from Bedrock (and any other LLM providers that we choose to support).

Support for AWS Bedrock model stack

Is your feature request related to a problem? Please describe.
We want to generalise the package by supporting AWS Bedrock models as students or teachers (Claude2 for instance)

Describe the solution you'd like
We need to support AWS Bedrock models as teacher or student. Currently we are only relying on OpenAI, whereas the library should be more generalised

Support for Gunicorn

Currently only Uvicorn is supported, and therefore this project doesn't support multiple workers. We should add support for Gunicorn to cover that popular use case.

Output more informative OpenAI error messages

Currently, other than when the key is invalid, if the retry mechanism for OpenAI generation fails, a generic error message is returned. We should instead return the specific error message that OpenAI returns on the final retry.
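A sketch of the intended behaviour, with the API call stubbed as a plain callable (names are illustrative):

```python
# Sketch: retain the provider's final exception and surface it in the
# raised error, instead of a generic message.
def generate_with_retries(call, retries: int = 3):
    last_error = None
    for _ in range(retries):
        try:
            return call()
        except Exception as e:
            last_error = e
    raise Exception(
        f"OpenAI API failed to generate a response: {last_error}"
    )
```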

Optionally delegate classifiers to XGBoost for finetuning and inference

Is your feature request related to a problem? Please describe.
LLMs are extremely inefficient at classification. XGBoost is better if the data is available. We could use the aligned data from the LLM to train an XGBoost model, which would be much faster to run.

Describe the solution you'd like
When the output types denote a classification task (i.e. where the goal is to sample one type in a union of literal types, or an enum), we optionally distil the teacher model into a decision forest using the XGBoost library.

Additional context
We could represent student models as optional packages, sort of like drivers, that the user could install through PIP.

E.g. pip3 install tanuki.py[xgboost]
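Detecting when the output annotation denotes a classification task could look like this (a sketch covering Literal and Enum outputs only):

```python
# Sketch: decide whether an output annotation describes a
# classification task, the condition for delegating to XGBoost.
import typing
from enum import Enum
from typing import Literal


def is_classification_output(annotation) -> bool:
    if typing.get_origin(annotation) is Literal:
        return True
    return isinstance(annotation, type) and issubclass(annotation, Enum)
```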

[Use-case] Data enricher for contacts

  • Context: Job marketplace company onboards thousands of applicants every day to find them tech jobs in the US
  • Problem: Applications often miss a lot of crucial information, which requires the onboarding team to manually reach out via email to request the info and enter the data in manually, costing tens of thousands of dollars in ops costs
  • Workflow
    • If fields are blank
    • Generate an email requesting that specific info
    • Once the email is responded to
    • Retrieve info to fill in the blank fields
      • Name
      • City of residence
      • State of residence
      • Phone number
      • LNKD account
      • Most recent occupation
      • Date of most recent occupation
      • Name of reference
      • Contact of reference (or something like that)

Tests fail when run in-bulk, but pass when run one-by-one

Something is going wrong with the state management.

When we run tests all together, several tests fail with errors such as:

self = <unittest.mock._patch object at 0x119b38650>

    def get_original(self):
        target = self.getter()
        name = self.attribute
    
        original = DEFAULT
        local = False
    
        try:
            original = target.__dict__[name]
        except (AttributeError, KeyError):
            original = getattr(target, name, DEFAULT)
        else:
            local = True
    
        if name in _builtins and isinstance(target, ModuleType):
            self.create = True
    
        if not self.create and original is DEFAULT:
>           raise AttributeError(
                "%s does not have the attribute %r" % (target, name)
            )
E           AttributeError: <module 'test_finance' from '/Users/.../PycharmProjects/monkeyFunctions/tests/test_patch/test_finance.py'> does not have the attribute 'classify_sentiment_2'

This is likely due to how mocked functions are scoped.

[Use-case] CRM data extractor from inbound emails

  • Context: Company gets a lot of inbound emails and chats from interested leads that provide a varying amount of info about themselves and their use-cases
  • Problem: Inbound has to manually enter this info into their CRM and takes hundreds of hours + accuracy is poor
  • Workflow:
    • Email comes in markdown / HTML
    • Extract the following:
      • First name:
      • Last name:
      • Role:
      • Email:
      • Company:
      • Date Inbounded:
      • Painpoint:
      • Urgency: (low, medium, high, none)
