chatglm-6b's People

Contributors

adambear, binary-husky, chenqy4933, cherrysaber, cjld, coderabbit214, dlutkaka, duzx16, ganymedenil, hiyouga, holk-h, hwaking, imclumsypanda, initialencounter, is, jsl9208, nczkevin, oedosoldier, rainatam, sengxian, songxxzp, tuteng0915, xiao9905, yanqiangmiffy, yfyang86, yvrjsharma, yysirs, zhangerling, zrzrzrzrzrzrzr, zwy4896

chatglm-6b's Issues

Gradio page is blank

Hi, I followed the steps in the README to build the web server. The server starts successfully, but the browser shows a blank page. How can I fix this?

RuntimeError: Library cudart is not initialized

Quantizing to int4 produced this error:

D:\chatglm6b\ChatGLM-6B>python web_demo.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [00:05<00:00,  1.57it/s]
Traceback (most recent call last):
  File "D:\chatglm6b\ChatGLM-6B\web_demo.py", line 5, in 
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(4).cuda()
  File "C:\Users\wizard/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1156, in quantize
    self.transformer = quantize(self.transformer, bits)
  File "C:\Users\wizard/.cache\huggingface\modules\transformers_modules\local\quantization.py", line 147, in quantize
    layer.attention.query_key_value = QuantizedLinear(
  File "C:\Users\wizard/.cache\huggingface\modules\transformers_modules\local\quantization.py", line 130, in __init__
    self.weight = compress_int4_weight(self.weight)
  File "C:\Users\wizard/.cache\huggingface\modules\transformers_modules\local\quantization.py", line 71, in compress_int4_weight
    kernels.int4WeightCompression(
  File "C:\Users\wizard\AppData\Local\Programs\Python\Python310\lib\site-packages\cpm_kernels\kernels\base.py", line 48, in __call__
    func = self._prepare_func()
  File "C:\Users\wizard\AppData\Local\Programs\Python\Python310\lib\site-packages\cpm_kernels\kernels\base.py", line 36, in _prepare_func
    curr_device = cudart.cudaGetDevice()
  File "C:\Users\wizard\AppData\Local\Programs\Python\Python310\lib\site-packages\cpm_kernels\library\base.py", line 72, in wrapper
    raise RuntimeError("Library %s is not initialized" % self.__name)
RuntimeError: Library cudart is not initialized 
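
A hedged diagnostic, not part of the repo: cpm_kernels loads the CUDA runtime (cudart64_*.dll) itself, separately from the one bundled inside the torch wheel, so a driver-only install can fail here even when torch.cuda works. A quick check of what is actually visible on PATH (the DLL name below is an assumption that varies by toolkit version):

import ctypes
import torch

print(torch.cuda.is_available())  # torch ships its own CUDA runtime, so this can be True regardless
try:
    ctypes.WinDLL("cudart64_110.dll")  # assumed name; matches CUDA Toolkit 11.x
    print("CUDA runtime found on PATH")
except OSError as exc:
    print("cudart not found; installing the CUDA Toolkit or adding its bin\\ directory to PATH may fix this:", exc)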

Is ARM-based macOS unsupported?

When I run the code I get this:

raise RuntimeError("Unknown platform: %s" % sys.platform)
RuntimeError: Unknown platform: darwin 

I'm on an M1 MacBook. Is there a solution?
(Is ARM just not good enough, haha?)

Is it because torch has no arm-mac-osx build available for download?
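
A minimal CPU-only sketch, under two assumptions: that the goal is plain inference on Apple Silicon, and that the installed cpm_kernels version degrades gracefully on non-CUDA platforms rather than raising at import (the "Unknown platform: darwin" error above comes from that import). Quantization is CUDA-only, so skip .quantize() and .cuda() and load in float32, which is also the README's CPU path:

# Sketch for Apple Silicon / CPU-only use; assumes enough RAM (~25 GB for fp32).
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float().eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)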

Answers mix English and Chinese

When I ask certain questions, the answer comes back as a mixture of English and Chinese. Is there any solution to this problem?

For example:

I ask: let us play chess! I will take the first step. my move is e4.
ChatGLM-6B: Sure, let's play! Your move is e4, and I'll go with d4 as my first move. It's always good to start with a pawn升变, and d4 is a good choice because it升变后可以攻击对手的中车, which is a weakness in many players' plans.

Startup fails: Symbol cudaLaunchKernel not found

Here is the error output:
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Symbol cudaLaunchKernel not found in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_65.dll

Will the training code be released?

Or, after loading the model with transformers, should we implement the follow-up training (e.g., RLHF) ourselves and continue training on our own corpus?

Error inside the loaded DLL

Error: AttributeError: C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll: undefined symbol: cudaDeviceGetAttribute
Also reported: Symbol cudaGetErrorName not found in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll
Symbol cudaPeekAtLastError not found in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll

I tried re-downloading the DLL, but it didn't help. The error occurs with or without quantization. GPU: NVIDIA GeForce GTX 1060.
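
This and the previous issue point at the same likely culprit, stated as an assumption: the ancient cudart64_*.dll shipped in the PhysX folder appears earlier on PATH than the real CUDA Toolkit runtime, so cpm_kernels binds to a DLL that is missing modern symbols. A hedged workaround is to put the toolkit's bin directory first, before anything loads the kernels:

# Hypothetical PATH fix; the toolkit path below is an assumed install location,
# adjust it to your CUDA version.
import os

cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin"  # assumption
os.environ["PATH"] = cuda_bin + os.pathsep + os.environ["PATH"]
# ...then import transformers / start web_demo.py after PATH is fixed.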

What is the difference between the eight files in the model list?

pytorch_model-00001-of-00008.bin
pytorch_model-00002-of-00008.bin
pytorch_model-00003-of-00008.bin
pytorch_model-00004-of-00008.bin
pytorch_model-00005-of-00008.bin 
pytorch_model-00006-of-00008.bin
pytorch_model-00007-of-00008.bin
pytorch_model-00008-of-00008.bin
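
These are not eight different models: they are eight shards of one checkpoint, split for download convenience. from_pretrained reassembles them using pytorch_model.bin.index.json, which maps every parameter name to its shard. A small sketch to inspect the mapping (assumes the files have been downloaded locally into the current directory):

# Sketch: show which shard holds which parameters; the path is an assumption.
import json

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

for name, shard in list(index["weight_map"].items())[:5]:
    print(f"{name} -> {shard}")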

Runtime Error

Hello! Running the model produces the following error:
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: nvrtc: error: failed to open libnvrtc-builtins.so.11.5.
Make sure that libnvrtc-builtins.so.11.5 is installed correctly.
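
A plausible reading, offered as an assumption: libnvrtc-builtins.so.11.5 belongs to CUDA 11.5, so the installed torch wheel expects a CUDA 11.5 runtime that is missing or shadowed. Checking which CUDA version the wheel was built against narrows it down:

# Sketch: confirm the CUDA version the torch wheel targets; a mismatch with the
# installed runtime is the assumed cause of the nvrtc failure.
import torch
print(torch.__version__, torch.version.cuda)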

Ask for training code

Hi, I want to train the model on my domain dataset and do research based on the ChatGLM model. Could you share the training code for ChatGLM?

Error suggests the weights are wrong

RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16F, lda, b, CUDA_R_16F, ldb, &fbeta, c, CUDA_R_16F, ldc, CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP)

transformers==4.26.1 won't install

ERROR: Could not find a version that satisfies the requirement transformers==4.26.1 (from versions: 0.1, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.7.0, 2.8.0, 2.9.0, 2.9.1, 2.10.0, 2.11.0, 3.0.0, 3.0.1, 3.0.2, 3.1.0, 3.2.0, 3.3.0, 3.3.1, 3.4.0, 3.5.0, 3.5.1, 4.0.0rc1, 4.0.0, 4.0.1, 4.1.0, 4.1.1, 4.2.0, 4.2.1, 4.2.2, 4.3.0rc1, 4.3.0, 4.3.1, 4.3.2, 4.3.3, 4.4.0, 4.4.1, 4.4.2, 4.5.0, 4.5.1, 4.6.0, 4.6.1, 4.7.0, 4.8.0, 4.8.1, 4.8.2, 4.9.0, 4.9.1, 4.9.2, 4.10.0, 4.10.1, 4.10.2, 4.10.3, 4.11.0, 4.11.1, 4.11.2, 4.11.3, 4.12.0, 4.12.1, 4.12.2, 4.12.3, 4.12.4, 4.12.5, 4.13.0, 4.14.0, 4.14.1, 4.15.0, 4.16.0, 4.16.1, 4.16.2, 4.17.0, 4.18.0)
ERROR: No matching distribution found for transformers==4.26.1
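
The candidate list stopping at 4.18.0 is telling: pip hides releases whose requires-python the current interpreter fails, and transformers 4.19+ requires Python >= 3.7 (offered as the likely explanation, not verified against this machine). Checking the interpreter version is the first step:

# Sketch: if this prints 3.6.x, upgrading Python should make 4.26.1 installable.
import sys
print(sys.version)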

Which torch version is required?

RuntimeError: probability tensor contains either inf, nan or element < 0

Model inference acceleration

Is there a way to speed up model inference?
I have two A100s. Is there a way to get faster inference than with a single A100?
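
A hedged sketch of one option, not an official recipe: with accelerate installed, device_map="auto" shards the layers across both A100s. This mainly buys memory headroom and larger batches; single-request latency may not beat one A100, because the layers still execute sequentially:

# Sketch assuming `pip install accelerate`; device_map="auto" splits layers
# across visible GPUs (pipeline-style placement, not tensor parallelism).
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True, device_map="auto"
).half().eval()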

ValueError: Unrecognized configuration class <class 'transformers_modules.local.configuration_chatglm.ChatGLMConfig'> to build an AutoTokenizer.

File "web_demo.py", line 4, in
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
File "/opt/anaconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 686, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.local.configuration_chatglm.ChatGLMConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, BloomConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GroupViTConfig, HubertConfig, IBertConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, Speech2TextConfig, Speech2Text2Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, YosoConfig.
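
One hedged guess at the cause: the transformers_modules.local prefix means the custom code was cached from a local path, and a stale cache (or a transformers version that predates remote-code tokenizer support) can make AutoTokenizer fall back to the built-in registry. Clearing the cached remote-code modules so they are re-fetched is worth trying; the cache location below is the default and an assumption for non-standard setups:

# Sketch: delete the cached remote-code modules; they are re-downloaded on the
# next from_pretrained(..., trust_remote_code=True) call.
import pathlib
import shutil

cache = pathlib.Path.home() / ".cache" / "huggingface" / "modules" / "transformers_modules"
shutil.rmtree(cache, ignore_errors=True)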

runtime error when executing model = AutoModel.from_pretrained...

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(4).cuda()

>>> model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(4).cuda()
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:09<00:00,  1.16s/it]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/chase/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b/9d1509a1ade49240535535aa020232c8a4b1c114/modeling_chatglm.py", line 1156, in quantize
    self.transformer = quantize(self.transformer, bits)
  File "/home/chase/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b/9d1509a1ade49240535535aa020232c8a4b1c114/quantization.py", line 147, in quantize
    layer.attention.query_key_value = QuantizedLinear(
  File "/home/chase/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b/9d1509a1ade49240535535aa020232c8a4b1c114/quantization.py", line 130, in __init__
    self.weight = compress_int4_weight(self.weight)
  File "/home/chase/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b/9d1509a1ade49240535535aa020232c8a4b1c114/quantization.py", line 71, in compress_int4_weight
    kernels.int4WeightCompression(
  File "/home/chase/Documents/miniconda3/envs/GLM/lib/python3.9/site-packages/cpm_kernels/kernels/base.py", line 48, in __call__
    func = self._prepare_func()
  File "/home/chase/Documents/miniconda3/envs/GLM/lib/python3.9/site-packages/cpm_kernels/kernels/base.py", line 36, in _prepare_func
    curr_device = cudart.cudaGetDevice()
  File "/home/chase/Documents/miniconda3/envs/GLM/lib/python3.9/site-packages/cpm_kernels/library/base.py", line 72, in wrapper
    raise RuntimeError("Library %s is not initialized" % self.__name)
RuntimeError: Library cudart is not initialized

platform
rtx 3070 8g
ubuntu 22.10
python 3.9 in conda virtual environment

Loading checkpoint shards failed

Loading checkpoint shards reaches 62%, then the process is killed.

In [3]: model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Downloading (…)l-00001-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.90G/1.90G [00:32<00:00, 59.1MB/s]
Downloading (…)l-00002-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.88G/1.88G [00:32<00:00, 57.6MB/s]
Downloading (…)l-00003-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.98G/1.98G [00:33<00:00, 59.1MB/s]
Downloading (…)l-00004-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.91G/1.91G [00:34<00:00, 55.4MB/s]
Downloading (…)l-00005-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.88G/1.88G [00:34<00:00, 54.4MB/s]
Downloading (…)l-00006-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.88G/1.88G [00:35<00:00, 52.5MB/s]
Downloading (…)l-00007-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.07G/1.07G [00:18<00:00, 58.2MB/s]
Downloading (…)l-00008-of-00008.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1.23G/1.23G [00:21<00:00, 57.0MB/s]
Loading checkpoint shards: 62%|██████████████████████████████████████████████████████████████████████ | 5/8 [00:07<00:04, 1.49s/it]
[1] 128976 killed ipython
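
"killed" with no Python traceback is usually the kernel's OOM killer: materializing the fp16 weights takes roughly 13 GB of host RAM plus overhead before anything reaches the GPU. A hedged mitigation, assuming accelerate is installed, loads shards without keeping a second full copy in memory:

# Sketch: low_cpu_mem_usage=True streams shards into the model instead of
# building a full state dict first; requires `pip install accelerate` (assumption).
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True, low_cpu_mem_usage=True
).half().cuda()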

6 GB GPU reports insufficient VRAM

GPU: RTX 3060 Laptop, 6 GB
Error: RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
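
Per the project README, fp16 needs about 13 GB of VRAM, so 6 GB cannot hold the unquantized model; INT4 quantization brings the footprint to roughly 6 GB and may just fit on this card:

# README-style int4 load; whether 6 GB suffices depends on context length and
# other VRAM consumers (display, browser), so treat this as best-effort.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True
).half().quantize(4).cuda()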

How to prepare inputs for question answering

How to prepare `input_ids`, `position_ids`, and `attention_mask` for question answering?
E.g.:

questions = ['Where is the capital of China?', 'Why do you like Chinese food?']
answers =['The capital of China is Beijing.', 'There are eight major Chinese cuisines, which greatly satisfy my taste.']

output = model(input_ids, position_ids, attention_mask)

How do I get `input_ids`, `position_ids`, and `attention_mask`? They seem to differ from other models (GPT-2, OPT, etc.), maybe because of the special tokenizer?
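
A hedged sketch of the two usual routes: for chat-style QA the bundled model.chat() builds the prompt, ids, and masks itself; for a raw generate() pass the custom tokenizer returns input_ids and attention_mask, and the modeling code derives ChatGLM's 2D position ids internally when they are not supplied. The calls below follow the repo's published usage, with no hand-built position_ids:

# Sketch: high-level chat API vs. manual tokenization for generate().
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda().eval()

# Route 1: let the model assemble the inputs.
response, history = model.chat(tokenizer, "Where is the capital of China?", history=[])

# Route 2: tokenize manually and call generate().
inputs = tokenizer("Where is the capital of China?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output[0]))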

ValueError: Unrecognized configuration class <class 'transformers_modules.local.configuration_chatglm.ChatGLMConfig'> to build an AutoTokenizer.

Got this error running the model:

D:\chatglm6b\ChatGLM-6B>python web_demo.py
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "D:\chatglm6b\ChatGLM-6B\web_demo.py", line 4, in 
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
  File "C:\Users\wizard\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 686, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.local.configuration_chatglm.ChatGLMConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, BloomConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GroupViTConfig, HubertConfig, IBertConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, Speech2TextConfig, Speech2Text2Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, YosoConfig.

D:\chatglm6b\ChatGLM-6B>

Cannot run without an NVIDIA GPU; the CPU-only method in the README does not work

PS D:\ppp\ChatGLM-6B> python web_demo.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "D:\ppp\ChatGLM-6B\web_demo.py", line 5, in <module>
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float()
  File "C:\Program Files\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 455, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "C:\Program Files\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 363, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
  File "C:\Program Files\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 274, in get_cached_module_file
    get_cached_module_file(
  File "C:\Program Files\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 237, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
  File "C:\Program Files\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 129, in check_imports
    importlib.import_module(imp)
  File "C:\Program Files\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Program Files\Python310\lib\site-packages\cpm_kernels\__init__.py", line 1, in <module>
    from . import library
  File "C:\Program Files\Python310\lib\site-packages\cpm_kernels\library\__init__.py", line 2, in <module>
    from . import cuda
  File "C:\Program Files\Python310\lib\site-packages\cpm_kernels\library\cuda.py", line 7, in <module>
    cuda = Lib.from_lib("cuda", ctypes.WinDLL("nvcuda.dll"))
  File "C:\Program Files\Python310\lib\ctypes\__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'nvcuda.dll' (or one of its dependencies). Try using the full path with constructor syntax.

Per the README I changed the line to model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float()
but it still fails with the same error.
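
What appears to be happening, stated as an assumption: transformers' dynamic-module loader (the check_imports frame in the traceback) imports every dependency declared by the remote code, and cpm_kernels opens nvcuda.dll at import time on Windows, so switching to .float() alone cannot avoid it on a machine without an NVIDIA driver. The failure is reproducible in isolation:

# Sketch: this is the exact call that fails inside cpm_kernels on a machine
# without an NVIDIA driver (nvcuda.dll is installed by the driver, not CUDA).
import ctypes
ctypes.WinDLL("nvcuda.dll")  # raises FileNotFoundError here too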
