
llama2.mojo's Introduction

llama2.🔥

[llama2.mojo benchmark chart]

Have you ever wanted to inference a baby Llama 2 model in pure Mojo? No? Well, now you can!

supported version: Mojo 24.1.0

With the release of Mojo, I was inspired to take my Python port of llama2.py and transition it to Mojo. The result? A version that leverages Mojo's SIMD and vectorization primitives, boosting performance over the Python version by nearly 250x. Impressively, after a few native improvements, the Mojo version outperforms the original llama2.c by 30% in multi-threaded inference, and it outperforms llama.cpp by 20% on baby-llama CPU inference. This showcases the potential of hardware-level optimizations through Mojo's advanced features.
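For intuition, here is an illustrative Python/NumPy sketch (not the Mojo implementation) of the difference between a naive scalar matmul and a vectorized one; the Mojo port applies the same idea with explicit SIMD primitives:

# Illustrative only: naive vs. vectorized matrix-vector multiply.
import numpy as np

def matmul_naive(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    out = np.zeros(w.shape[0], dtype=w.dtype)
    for i in range(w.shape[0]):          # one scalar multiply-add at a time
        for j in range(w.shape[1]):
            out[i] += w[i, j] * x[j]
    return out

def matmul_vectorized(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    return w @ x                         # whole rows processed with SIMD under the hood

w = np.random.rand(256, 256).astype(np.float32)
x = np.random.rand(256).astype(np.float32)
assert np.allclose(matmul_naive(w, x), matmul_vectorized(w, x), rtol=1e-3)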

supported models

At the moment, the following models have been successfully executed via llama2.mojo:

  • stories 260K, 15M, 42M, 110M
  • Tinyllama-1.1B-Chat-v0.2

extensive benchmark on Apple M1 Max

mojo vs 6 programming languages

benchmark (updated)

Mac M1 Max (6 threads)

| Model | llama2.c (OMP/parallelized) | llama2.mojo (parallelized) | llama.cpp (CPU, 6 threads) | llama2.py |
|---|---|---|---|---|
| stories15M.bin | 730 tok/s | 1025 tok/s | 890 tok/s | 38 tok/s (pypi) |
| stories42M.bin | 270 tok/s | 490 tok/s | 420 tok/s | - |
| stories110M.bin | 102 tok/s | 195 tok/s | 187 tok/s | - |
| TinyLlama-1.1B | - | 23 tok/s | - | - |

Ubuntu 20.04, Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, 6 cores, 12 threads

| Model | llama2.c (OMP/parallelized) | llama2.mojo (parallelized) | llama2.mojo (naive matmul) | llama2.py |
|---|---|---|---|---|
| stories15M.bin | 435 tok/s | 440 tok/s | 67.26 tok/s | 1.3 tok/s |
| stories110M.bin | 64 tok/s | 63 tok/s | 9.20 tok/s | - |
| TinyLlama-1.1B | 7.25 tok/s | 7.25 tok/s | - | - |

prerequisites

Make sure you have installed and configured Mojo in your environment.

Or you can use the Mojo Playground to run this model.

try the 🔥 magic

HuggingFace - https://huggingface.co/spaces/radames/Gradio-llama2.mojo

feel the 🔥 magic

First, navigate to the folder where you keep your projects and clone this repository into it:

git clone https://github.com/tairov/llama2.mojo.git

Then, open the repository folder:

cd llama2.mojo

Now, let's download the model

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin

Then, just run the Mojo code. Here -s sets the random seed, -n the number of tokens to generate, -t the sampling temperature (not the thread count), and -i the prompt:

mojo llama2.mojo stories15M.bin -s 100 -n 256 -t 0.5 -i "Mojo is a language"

example output

num hardware threads:  6
SIMD vector width:  16
checkpoint size:  60816028 [ 57 MB ]
n layers:  6
vocab size:  32000
Mojo is a language that people like to talk. Hephones are very different from other people. He has a big book with many pictures and words. He likes to look at the pictures and learn new things.
One day, Mojo was playing with his friends in the park. They were running and laughing and having fun. Mojo told them about his book and his friends. They listened and looked at the pictures. Then, they saw a picture of a big, scary monster. They were very scared and ran away.
Mojo was sad that his book was gone. He told his friends about the monster and they all felt very sad. Mojo's friends tried to make him feel better, but nothing worked. Mojo never learned his language again.
achieved tok/s:  440.21739130434781

running via Docker

docker build --build-arg AUTH_KEY=<your-modular-auth-key> -t llama2.mojo .
docker run -it llama2.mojo

With Gradio UI:

# uncomment the last line in the Dockerfile: CMD ["python", "gradio_app.py"]
docker run -it -p 0.0.0.0:7860:7860 llama2.mojo

citing llama2.🔥

If you use or discuss llama2.mojo in your academic research, please cite the project to help spread awareness:

@misc{llama2.mojo,
  author = {Aydyn Tairov}, 
  title = {Inference Llama2 in one file of pure Mojo},
  year = {2023},
  month = {09},
  howpublished = {\url{https://github.com/tairov/llama2.mojo}},
  note = {Llama2 Mojo, MIT License}
}

We kindly request that you include a link to the GitHub repository in published papers. This will allow interested readers to easily find the latest updates and extensions to the project.

llama2.mojo aims to encourage academic research on efficient implementations of transformer architectures, the llama model, and applications of the Mojo programming language. Citing the project helps grow the knowledge community around these topics. We appreciate your support through referencing llama2.mojo!

play with Tinyllama-1.1B-Chat-v0.2

TinyLlama is a 1.1B-parameter Llama model trained on 3 trillion tokens. Its compact size suits the many applications that demand a restricted computation and memory footprint, which is also why we selected it as the first chat model to support.

First, navigate to the folder where you keep your projects and clone this repository into it:

git clone https://github.com/tairov/llama2.mojo.git

Then, open the repository folder:

cd llama2.mojo

Now, let's download the model and the tokenizer

wget https://huggingface.co/kirp/TinyLlama-1.1B-Chat-v0.2-bin/resolve/main/tok_tl-chat.bin
wget https://huggingface.co/kirp/TinyLlama-1.1B-Chat-v0.2-bin/resolve/main/tl-chat.bin

Then, just run the Mojo code (-z points to the custom tokenizer file, and -t 0 selects greedy sampling):

mojo llama2.mojo tl-chat.bin \
    -z tok_tl-chat.bin \
    -n 256 -t 0 -s 100 -i "<|im_start|>user\nGive me a python function to generate Fibonacci sequence<|im_end|>\n<|im_start|>assistant\n"
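For reference, the prompt passed via -i above follows TinyLlama's ChatML-style template. A minimal Python sketch (the helper name is illustrative) of how such a prompt string is assembled:

# Illustrative helper: build the ChatML-style prompt TinyLlama-Chat expects.
def chatml_prompt(user_message: str) -> str:
    return (
        "<|im_start|>user\n"
        + user_message
        + "<|im_end|>\n<|im_start|>assistant\n"
    )

print(chatml_prompt("Give me a python function to generate Fibonacci sequence"))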

example output

num hardware threads:  6
SIMD vector width:  16
checkpoint size:  4400767004 [ 4196 MB ]
n layers:  22
vocab size:  32003
<|im_start|>user
Give me a python function to generate Fibonacci sequence<|im_end|>
<|im_start|>assistant
Sure, here's a Python function that generates the Fibonacci sequence:

def fibonacci(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

This function takes an integer n as a parameter and returns the next Fibonacci number. It uses a recursive approach to calculate the Fibonacci numbers, starting from 0 and working up. The function returns the value it found at the current level of the recursion, which can be either 0 or a Fibonacci number.

license

MIT


llama2.mojo's Issues

HuggingFace demo not working

Hi,

The HuggingFace demo of this project is not working. It says:

Build failed with exit code: 1
...
--> ERROR: process "/bin/sh -c curl https://get.modular.com | MODULAR_AUTH=$AUTH_KEY sh -     && modular install mojo" did not complete successfully: exit code: 1

Killed

I downloaded the repo and was super happy to see the story model work!
Then I looked further down, saw the chat model, and installed it via the wget commands provided in the README.
But when I try to run it, this happens:

username@username:~/mojo/llama2.mojo$ mojo llama2.mojo tl-chat.bin \
    -r falcon \
    -z tok_tl-chat.bin \
    -n 256 -t 0 -s 100 -i "<|im_start|>user\nGive me a python function to generate Fibonacci sequence<|im_end|>\n<|im_start|>assistant\n"
num hardware threads:  4
SIMD vector width:  16
Killed

(sorry, I accidentally opened the issue before I finished typing it 😢 )

Actually, this even happens if I follow all the instructions again from scratch, downloading everything into a new folder.

it is using Python 3.6 instead of 3.9.18

/home/y3rawat/.modular/pkg/packages.modular.com_mojo/bin/mojo /home/y3rawat/mojo/llama2.mojo
/home/y3rawat/mojo/llama2.mojo:10:6: error: unable to locate module 'read'
from read import BufReader, File
^
/home/y3rawat/.modular/pkg/packages.modular.com_mojo/bin/mojo: error: failed to parse the provided Mojo
(python38) y3rawat@y3rawat-ASUS-TUF-Gaming-F15-FX507ZC4-FX507ZC4:~/mojo$

TODO: Support for gguf models

hey team, incredible work being done here.

Wondering whether you only support .bin models, or whether it would also work with gguf quantized models.

If not, then that's a real feature request. Nowadays almost everyone uses gguf models, as they are easier to run on consumer-grade hardware.

thanks.
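As background (not from the thread): GGUF files can be recognized by their leading magic bytes, so a small Python check, sketched here under that assumption, distinguishes them from the raw llama2.c-style .bin checkpoints this project expects:

# Sketch: GGUF files start with the ASCII magic "GGUF"; llama2.c-style .bin
# checkpoints have no magic (they begin directly with the int32 config header).
def looks_like_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

print(looks_like_gguf("stories15M.bin"))  # expected: False (llama2.c format)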

mojo llama2.mojo error

mojo 1.0.0+601 installed from Canonical IS Snaps
FileNotFoundError: [Errno 2] No such file or directory: 'juju': 'juju'

Unroll vectorisation

Hi there,

awesome port and demonstrator. Have you compared the performance of vectorize and vectorize_unroll?

While tinkering with demanding algorithms, I found that unrolling the partial loop 12x gave a 10% performance increase. Maybe enough to beat cpp? 😁

Segmentation fault error when it is built as a binary

Thanks for your fantastic project. Out of curiosity, I tried to build it as a binary. It seemed to build at first, but it didn't work: it showed a message asking to set the Python path, and after I set that environment variable, a segmentation fault occurred. I think it comes from the Mojo builder, maybe. My environment is WSL on Windows 11.

In some cases, we don't make full use of the threads

Although I have a 6-core CPU, I actually have 12 hardware threads.
[screenshot: system monitor showing 12 logical CPUs]

In our code, we take it for granted that num_cores = threads:
print("num hardware threads: ", num_cores()) and self.rt = Runtime(num_cores() // 2)

So I think we haven't made full use of the threads.
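A quick way to see the distinction from Python (a sketch, assuming the optional third-party psutil package is installed):

# Logical CPUs (hardware threads) vs. physical cores.
import os
import psutil  # third-party: pip install psutil

print("logical CPUs (hardware threads):", os.cpu_count())  # e.g. 12
print("physical cores:", psutil.cpu_count(logical=False))  # e.g. 6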

Bug

Stumbled on this while trying to run the code on WSL Ubuntu

num parallel workers: 2  SIMD width: 16
checkpoint size:  60816028 [ 57 MB ] | n layers: 6 | vocab size: 32000
terminate called after throwing an instance of 'std::system_error'
  what():  Resource temporarily unavailable

Can a REST API be built?

The fast llama2 inference is really helpful to us, but how can a REST API be added to the Mojo code, preferably one compatible with the OpenAI interface? I don't know much about Mojo; I do know how to use Python Flask. Can you help me?
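A minimal sketch of that idea: wrap the existing CLI in a subprocess behind a Flask endpoint whose shape loosely follows the OpenAI completions API. The route, port, and JSON field names here are illustrative assumptions, not part of this project:

# Hypothetical Flask wrapper around the llama2.mojo CLI.
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)
MODEL_PATH = "stories15M.bin"  # assumed model location

@app.route("/v1/completions", methods=["POST"])
def completions():
    body = request.get_json(force=True)
    out = subprocess.run(
        ["mojo", "llama2.mojo", MODEL_PATH,
         "-n", str(int(body.get("max_tokens", 256))),
         "-t", str(float(body.get("temperature", 0.5))),
         "-i", body.get("prompt", "")],
        capture_output=True, text=True, check=True,
    )
    return jsonify({"choices": [{"text": out.stdout}]})

if __name__ == "__main__":
    app.run(port=8000)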

How does this relate to LLaMA?

Ok, I installed Mojo, cloned your repo, and ran the test. It works, congrats! But how does all of this relate to LLaMA? Nothing happened when I tried to run Llama 2 itself:

alex@NLDW4-5-20-11:~/ai/llama2.mojo$ mojo llama2.mojo /ai/llama.cpp/models/ggml-model-q4_1.bin -s 100 -n 256 -t 0.5 -i "Llama is an animal"
num hardware threads: 12
SIMD vector width: 16
checkpoint size: 4238459520
Killed
alex@NLDW4-5-20-11:
/ai/llama2.mojo$ mojo llama2.mojo ~/ai/llama.cpp/models/ggml-model-q4_1.bin -s 100 -n 256 -t 4 -i "Llama is an animal"
num hardware threads: 12
SIMD vector width: 16
checkpoint size: 4238459520
Killed

I don't know what -t 0.5 means (threads, I suppose); I also tried -t 4, again without results.

The key question here is how to run Llama 2 using this new language called Mojo. And if you made a Mojo wrapper for the LLaMA/Llama 2 models, please provide instructions on how to run the model using this wrapper.

Thank you.

Do you want to grow this project?

Idk what your plan is with this project, so I just wanted to ask whether you want to grow it and advance into enabling:

  • more available models (7B, 13B), CodeLlama, ...
  • support quantized models
  • improved abstractions over multiple files
  • gpu support
  • documentation
  • jsonformer or guidance on top of this
  • langchain or openai api integration

We could create separate TODO issues for these features to enable work by the community.
If you don't want to grow it, maybe we could create a community fork building on top of it.
I really like the idea of doing inference in Mojo, so I'm really grateful for this project, and I think this could be a good opportunity to learn more about Mojo by building some features :)

Question about models

I found this interesting project via the 'AI Anywhere' channel on YouTube. I've installed Modular and Mojo, and successfully run your test on an underpowered mini computer with only a 1.5GHz 4-core Intel Celeron CPU running Ubuntu 20.04.6; this achieved 32.5 tok/s.

I'm an LLM newbie so my questions may appear stupid!! Can this project be run with other models?

I tried the following:
mojo llama2.mojo /home/ezyweb/Public/chatpdf1/models/llama-2-7b-chat.Q4_K_M.gguf -s 100 -n 256 -t 0.5 -i "What is Llama 2"

And got the result:
num hardware threads: 4 SIMD vector width: 8 checkpoint size: 4081004224 [ 3891 MB ] Killed

Is that likely an under-resourced hardware issue, or is the project not compatible with .gguf models?

Where is 'tokenizer.bin'?

Hi, really impressive work.
Unfortunately, I couldn't run your code.
Your code tries to read 'tokenizer.bin', but it isn't provided.
Please tell me where to find it.

error: unable to locate module 'read'

On Mac M1, build

from read import BufReader, File
     ^
mojo: error: failed to parse the provided Mojo

version:

% mojo --version
mojo 0.4.0 (9e33b013)

Prompt input?

Hi @tairov, great work here!
I notice you're not using the prompt; it would be fun to be able to input it. I could add it in #10

Mojo: error: execution exited with a non-zero result: 1

The error occurs when running llama2.mojo.
My environment: Python 3.10 (Ubuntu 22.04)

console information below:

mojo llama2.mojo stories15M.bin -s 100 -n 256 -t 0.5 -i "Llama is an animal"
num hardware threads: 192 SIMD vector width: 16
Unhandled exception caught during execution: An error occurred in Python.
mojo: error: execution exited with a non-zero result: 1

Turn on discussions

It might be worth turning on discussions. It would be helpful to discuss performance improvements so there is a history of what people have tried and any benchmarks run.

How to use mojo playground to run this model?

Hi,

Maybe a stupid question, but I couldn't find a way to execute shell commands in the Mojo Playground's console or notebook. How did you manage to run this project in the Mojo Playground?

Thanks!

typo: a little fix for the rng_seed

# Unhandled exception caught during execution: String is not convertible to integer.
# rng_seed = atol(args[i + 1])

The failing line sits in the wrong branch of the argument parser: it belongs under -s, where the argument is a number, not under -i, where it is the prompt string:

if args[i] == "-i":
    prompt = args[i + 1]
    rng_seed = atol(args[i + 1])  # misplaced: belongs under if args[i] == "-s":
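A Python rendering of the corrected branch structure, for clarity (a sketch; the actual code is Mojo):

# Corrected handling: only -s parses an integer seed; -i stays a string.
import sys

prompt, rng_seed = "", 0
args = sys.argv[1:]
i = 0
while i < len(args) - 1:
    if args[i] == "-i":
        prompt = args[i + 1]
    elif args[i] == "-s":
        rng_seed = int(args[i + 1])  # atol(...) in the Mojo source
    i += 2
print(prompt, rng_seed)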

Segmentation fault on M1 Max 32GB

> MOJO_PYTHON_LIBRARY="/Users/shroominic/dev/miniforge3/lib" mojo llama2.mojo stories110M.bin -i "hello"
num parallel workers: 10  SIMD width: 8

Stack dump:
0.      Program arguments: mojo llama2.mojo stories110M.bin -i hello
Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):
0  mojo                     0x0000000100f79990 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) + 56
1  mojo                     0x0000000100f77af0 llvm::sys::RunSignalHandlers() + 112
2  mojo                     0x0000000100f7a02c SignalHandler(int) + 344
3  libsystem_platform.dylib 0x000000018dd06a24 _sigtramp + 56
4  libsystem_platform.dylib 0x0000000280058ac8 _sigtramp + 4063568092
5  libsystem_platform.dylib 0x000000028005807c _sigtramp + 4063565456
6  mojo                     0x00000001012ca24c M::KGEN::ExecutionEngine::runProgram(llvm::StringRef, llvm::StringRef, llvm::function_ref<M::ErrorOrSuccess (void*)>) + 1156
7  mojo                     0x0000000100ed3c64 run(M::State const&) + 3980
8  mojo                     0x0000000100ebcb2c main + 1672
9  dyld                     0x000000018d97ff28 start + 2236
[79626:9834787:20231019,201441.806415:WARNING crash_report_exception_handler.cc:257] UniversalExceptionRaise: (os/kern) failure (5)

[1]    79624 segmentation fault  MOJO_PYTHON_LIBRARY="/Users/shroominic/dev/miniforge3/lib" mojo llama2.mojo  -i

Not sure what this all means but I am just trying to run this on my MacBook Pro based on the README.md ...
I installed mojo and I am able to run basic hello world scripts and I've put the path of my conda base env.

Here another try with LLVM Symbolizer and the smaller model:

Stack dump:
0.      Program arguments: mojo llama2.mojo stories15M.bin -s 42 -m 256 -t 0.5 -i hello -z tokenizer.bin
 #0 0x0000000100729990 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/Users/shroominic/.modular/pkg/packages.modular.com_mojo/bin/mojo+0x1000c5990)
 #1 0x0000000100727af0 llvm::sys::RunSignalHandlers() (/Users/shroominic/.modular/pkg/packages.modular.com_mojo/bin/mojo+0x1000c3af0)
 #2 0x000000010072a02c SignalHandler(int) (/Users/shroominic/.modular/pkg/packages.modular.com_mojo/bin/mojo+0x1000c602c)
 #3 0x000000018dd06a24 (/usr/lib/system/libsystem_platform.dylib+0x18042ea24)
 #4 0x0000000280052c7c 
 #5 0x0000000280051fc4 
 #6 0x0000000100a7a24c M::KGEN::ExecutionEngine::runProgram(llvm::StringRef, llvm::StringRef, llvm::function_ref<M::ErrorOrSuccess (void*)>) (/Users/shroominic/.modular/pkg/packages.modular.com_mojo/bin/mojo+0x10041624c)
 #7 0x0000000100683c64 run(M::State const&) (/Users/shroominic/.modular/pkg/packages.modular.com_mojo/bin/mojo+0x10001fc64)
 #8 0x000000010066cb2c main (/Users/shroominic/.modular/pkg/packages.modular.com_mojo/bin/mojo+0x100008b2c)
 #9 0x000000018d97ff28 
[35841:10187750:20231020,185712.713361:WARNING crash_report_exception_handler.cc:257] UniversalExceptionRaise: (os/kern) failure (5)
[1]    35839 segmentation fault  MOJO_PYTHON_LIBRARY= LLVM_SYMBOLIZER_PATH= mojo llama2.mojo stories15M.bin -s

Increase the number of SIMD registers used to speed up execution

I spent some time investigating why the parallelized + vectorized version of matmul is slower than the vectorized-only one.

Older matmul examples showed that multi-core + vector was faster. Still, for me, both the matmul notebook example on the Playground and the matmul example from the repo, run on a GitHub Codespaces instance (4 cores, 16 GB), showed the multi-core version as slower.

I tried two commands: mojo examples/matmul.mojo, and mojo build examples/matmul.mojo followed by running the binary. Both gave the same result: multi-core slower. Using htop, I also made sure that the multi-core version was utilizing all cores.

I found this PR - modularml/mojo#742 - where you can see that the vector width value you get from simdwidthof is multiplied. On the GitHub Codespaces instance, my base value from simdwidthof was 8, so I benchmarked higher values: 16 (2x), 32 (4x), and 64 (8x). You can see the results below:

[chart: benchmark results for nelts = 8, 16, 32, and 64]

I believe adjusting the nelts value should bring additional speed-ups; it is currently:

alias nelts = simdwidthof[DType.float32]()

CPU details:

System information: 
    OS          :  linux
    CPU         :  znver3
    Arch        :  x86_64-unknown-linux-gnu
    Num Cores   :  4
    CPU Features:  avx2

Error executing mojo llama2.mojo

After having installed Mojo (working) and llama2.mojo as described, I ran mojo llama2.mojo on Ubuntu 22.04 with 16 cores and got:

llama2.mojo $ mojo llama2.mojo
num hardware threads: 16 SIMD vector width: 16
checkpoint size: 60816028
Unhandled exception caught during execution: An error occurred in Python.
mojo: error: execution exited with a non-zero result: 1

Make the tokenizer better

I'm trying to make llama2.mojo work with TinyLlama-1.1B,
which is a GQA model and does not tie its embeddings.
I have now finished converting the model and modifying parts of llama2.mojo (drawing on llama.cpp and llama2.c).
I have noticed that our tokenizer is not stable compared with the Hugging Face tokenizer.
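One way to get reference tokenizations to compare against (a sketch, assuming the transformers package is installed and that the Hugging Face model id below is the right one):

# Compare llama2.mojo's token ids against the Hugging Face tokenizer's output.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.2")
text = "<|im_start|>user\nHello<|im_end|>\n"
print(tok.encode(text))    # reference ids
print(tok.tokenize(text))  # reference tokens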

Dockerfile errors

First of all: Amazing job! Congrats!

I had some errors when I tried to run the Docker version. When I fixed the first, the second appeared:

  1. The authentication fails. I believe Modular changed the way authentication works.
  2. Mojo tries to use a venv, but "conda init" is called after that.

I've submitted a pull request with the changes that work for me: #70

My Setup:

  • Intel i5-13600K
  • 32 GB RAM
  • WSL2 Ubuntu 22.04 LTS
  • Windows 11 Pro Insider Preview x64 (v. 22H2)
  • Docker Desktop 4.22.0 (117440)

Replicating Steps:
Inside Repo Directory:

docker build --build-arg AUTH_KEY=MY_MODULAR_KEY -t llama2.mojo .

Terminal Print (Error 1):

17.15 Setting up modular (0.2.1) ...
17.16 Processing triggers for libc-bin (2.31-0ubuntu9.12) ...
17.18 sh: 80: [[: not found
17.18   __  __           _       _
17.18  |  \/  | ___   __| |_   _| | __ _ _ __
17.18  | |\/| |/ _ \ / _` | | | | |/ _` | '__|
17.18  | |  | | (_) | (_| | |_| | | (_| | |
17.18  |_|  |_|\___/ \__,_|\__,_|_|\__,_|_|
17.18 
17.18 Welcome to the Modular CLI!
17.18 For info about this tool, type "modular --help".
17.18 
17.18 To install Mojo🔥, type "modular install mojo".
17.18 
17.18 For Mojo documentation, see https://docs.modular.com/mojo.
17.18 To chat on Discord, visit https://discord.gg/modular.
17.18 To report issues, go to https://github.com/modularml/mojo/issues.
21.66 modular: error: please run `modular auth` before attempting to install a package
------
Dockerfile:52
--------------------
  51 |     
  52 | >>> RUN curl https://get.modular.com | MODULAR_AUTH=$AUTH_KEY sh - \
  53 | >>>     && modular install mojo 
  54 |     
--------------------
ERROR: failed to solve: process "/bin/sh -c curl https://get.modular.com | MODULAR_AUTH=$AUTH_KEY sh -     && modular install mojo" did not complete successfully: exit code: 1

Terminal Print (Error 2):

 => ERROR [ 6/15] RUN modular install mojo                                                                                                                                                                40.6s 
------                                                                                                                                                                                                          
 > [ 6/15] RUN modular install mojo:                                                                                                                                                                            
40.33 The virtual environment was not created successfully because ensurepip is not                                                                                                                             
40.33 available.  On Debian/Ubuntu systems, you need to install the python3-venv
40.33 package using the following command.
40.33 
40.33     apt install python3.8-venv
40.33 
40.33 You may need to use sudo with that command.  After installing the python3-venv
40.33 package, recreate your virtual environment.
40.33 
40.33 Failing command: ['/home/user/.modular/pkg/packages.modular.com_mojo/venv/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']
40.33 
40.56 modular: error: failed to run python: 
40.56 # Found release for https://packages.modular.com/mojo @ 0.4.0
40.56 # Installing to /home/user/.modular/pkg/packages.modular.com_mojo
40.56 # Downloading artifacts. Please wait...
40.56 # Downloads complete, setting configs...
40.56 # Configs complete, running post-install hooks...
------
Dockerfile:54
--------------------
  52 |     RUN curl https://get.modular.com | MODULAR_AUTH=$AUTH_KEY sh -
  53 |     RUN modular auth $AUTH_KEY
  54 | >>> RUN modular install mojo 
  55 |     
  56 |     RUN useradd -m -u 1000 user
--------------------
ERROR: failed to solve: process "/bin/sh -c modular install mojo" did not complete successfully: exit code: 1

Getting very strange response when trying the second example in README.md

~/src/AI/mojo/llama2.mojo$ mojo llama2.mojo tl-chat.bin \
    -r falcon \
    -z tok_tl-chat.bin \
    -n 256 -t 0 -s 100 -i "<|im_start|>user\nGive me a python function to generate Fibonacci sequence<|im_end|>\n<|im_start|>assistant\n"
num hardware threads:  12
SIMD vector width:  16
checkpoint size:  4400767004 [ 4196 MB ]
n layers:  22
vocab size:  32003
<|im_start|>user
Give me a python function to generate Fibonacci sequence<|im_end|>
<|im_start|>assistant
¿Quiero debera.io|efes<|
|- [aquíntena|
|-|re|re|
|-|
|-ichas|[estructurañiñu|implementa.py|
|esínda|
¿Quiero|

|Olahi|

Does anyone know how to resolve this?
