
corenet's Issues

Request to Add CoreRec: A Graph-Based Recommendation Engine

Dear CoreNet Team,

I am writing to propose the addition of a new recommendation engine, CoreRec, to the CoreNet repository. CoreRec is a recommendation engine designed specifically for graph-based algorithms. It integrates advanced neural network architectures and excels at node recommendation, model training, and graph visualization.

Key Features of CoreRec:

  • GraphTransformer Model: A Transformer model tailored for graph data with customizable parameters.
  • GraphDataset: A PyTorch dataset for efficient handling of graph data.
  • Training Functionality: Comprehensive training functions for various graph-based machine learning models.
  • Prediction Capability: Accurate prediction of similar nodes within a graph.
  • Graph Visualization: Robust 2D and 3D graph visualization tools.

Benefits of Including CoreRec in CoreNet:

  • Enhanced Recommendation Capabilities: Leverage advanced graph algorithms to improve recommendation accuracy.
  • Integration with CoreNet: Seamlessly integrate CoreRec's functionalities with existing CoreNet infrastructure.
  • Community Collaboration: Foster collaboration and innovation within the CoreNet community by providing a state-of-the-art recommendation engine.

Repository URL: CoreRec GitHub Repository

We believe that CoreRec would be a valuable addition to the CoreNet repository and look forward to your feedback and consideration.

Thank you for your time and attention.
Best regards,

Vishesh Yadav
mail
corerec site

Proposal for Streaming HuggingFace Datasets to Optimize Workflow

I hope this message finds you well. I would like to discuss the possibility of adjusting the current codebase to enable streaming of datasets directly from HuggingFace, eliminating the need for downloading them. This enhancement can significantly streamline the workflow, reduce storage requirements, and improve efficiency, especially for users working with limited local storage or in environments where data download speeds are a bottleneck.

Implementing dataset streaming can be achieved by leveraging HuggingFace's datasets library, which supports on-the-fly data access. The modification would involve integrating this functionality into the existing data handling pipeline, ensuring compatibility and seamless transition for current users.

The high-level steps include:

  1. Updating the data loading functions to utilize HuggingFace's load_dataset with streaming enabled (see the sketch after this list).
  2. Ensuring all downstream processes can handle data in a streamed format without requiring local storage.
  3. Conducting thorough testing to verify the integrity and performance of the streamed data pipeline.
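
For illustration, here is a minimal sketch of what step 1 could look like, assuming only HuggingFace's datasets library; the dataset name and the consumption loop are placeholders, not references to the actual CoreNet data pipeline.

    # Minimal sketch: stream a HuggingFace dataset instead of downloading it.
    # The dataset name and the loop below are illustrative placeholders,
    # not part of the CoreNet codebase.
    from datasets import load_dataset

    # streaming=True returns an IterableDataset; nothing is written to disk.
    stream = load_dataset("cifar10", split="train", streaming=True)

    # Streamed datasets are consumed lazily, one example at a time.
    for i, example in enumerate(stream):
        print(example.keys())  # image/label fields, depending on the dataset
        # ... hand the example to the existing preprocessing / DataLoader logic ...
        if i >= 3:  # just a smoke test
            break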

If you are interested, I can raise a pull request with the proposed changes for your review. This would allow us to collaboratively refine and integrate this feature into the project.

Looking forward to your thoughts on this.
Best regards,

Vishesh Yadav

Import "corenet.internal.cli.entrypoints" could not be resolved (Pylance: reportMissingImports)

I'm following the train_a_new_model_on_a_new_dataset_from_scratch.ipynb notebook and used python3.11 -m venv venv && source venv/bin/activate for the virtual environment.

I noticed that corenet doesn't have an internal package. The correct import would instead be from corenet.cli.entrypoints import entrypoints as internal_entrypoints.

But after using the revised import and running corenet-train --common.config-file projects/playground_cifar10/classification/cifar10.yaml, I got the following error:

[2024-06-16 12:11:38,989] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
2024-06-16 12:11:58 - DEBUG    - Cannot load internal arguments, skipping.
2024-06-16 12:12:06 - LOGS    - Random seeds are set to 0
2024-06-16 12:12:06 - LOGS    - Using PyTorch version 2.2.2
2024-06-16 12:12:06 - WARNING - No GPUs available. Using CPU
2024-06-16 12:12:06 - LOGS    - Setting --ddp.world-size the same as the number of available gpus.
2024-06-16 12:12:06 - LOGS    - Directory created at: results/run_1
  File "/Users/friedahuang/Desktop/corenet/venv/bin/corenet-train", line 8, in <module>
    sys.exit(main_worker())
  File "/Users/friedahuang/Desktop/corenet/corenet/cli/main_train.py", line 36, in main_worker
    launcher(callback)
  File "/Users/friedahuang/Desktop/corenet/corenet/train_eval_pipelines/default_train_eval.py", line 292, in <lambda>
    return lambda callback: callback(self)
  File "/Users/friedahuang/Desktop/corenet/venv/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/Users/friedahuang/Desktop/corenet/corenet/cli/main_train.py", line 27, in callback
    train_sampler = train_eval_pipeline.train_sampler
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/functools.py", line 1001, in __get__
    val = self.func(instance)
  File "/Users/friedahuang/Desktop/corenet/corenet/train_eval_pipelines/default_train_eval.py", line 102, in train_sampler
    _, _, train_sampler = self._train_val_loader_sampler
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/functools.py", line 1001, in __get__
    val = self.func(instance)
  File "/Users/friedahuang/Desktop/corenet/corenet/train_eval_pipelines/default_train_eval.py", line 87, in _train_val_loader_sampler
    return create_train_val_loader(opts)
  File "/Users/friedahuang/Desktop/corenet/corenet/data/data_loaders.py", line 97, in create_train_val_loader
    train_dataset, valid_dataset = get_train_val_datasets(opts)
  File "/Users/friedahuang/Desktop/corenet/corenet/data/datasets/__init__.py", line 124, in get_train_val_datasets
    train_dataset = build_dataset_from_registry(
  File "/Users/friedahuang/Desktop/corenet/corenet/data/datasets/__init__.py", line 83, in build_dataset_from_registry
    dataset = DATASET_REGISTRY[dataset_name, dataset_category](
  File "/Users/friedahuang/Desktop/corenet/corenet/utils/registry.py", line 133, in __getitem__
    logger.error(temp_str + "\n")
  File "/Users/friedahuang/Desktop/corenet/corenet/utils/logger.py", line 46, in error
    traceback.print_stack()
2024-06-16 12:12:06 - ERROR   - 
classification:cifar10 not yet supported in dataset_registry registry.
Supported values are:
         0: classification:imagenet_a
         1: classification:imagenet_r
         2: classification:imagenet_sketch
         3: audio_classification:speech_commands_v2
         4: classification:coco
         5: classification:imagenet
         6: classification:imagenet_v2
         7: classification:places365
         8: classification:wordnet_tagged_classification
         9: detection:coco
         10: detection:coco_mask_rcnn
         11: detection:coco_ssd
         12: language_modeling:commonsense_170k
         13: language_modeling:general_lm
         14: multi_modal_image_text:flickr
         15: multi_modal_image_text:img_text_tar
         16: segmentation:ade20k
         17: segmentation:coco
         18: segmentation:coco_stuff
         19: segmentation:pascal
. Exiting!!!

For context, I'm using an Apple M2 Pro.

Streaming HuggingFace Datasets

Hi, is there a possibility to adjust the code so that datasets are streamed from HuggingFace instead of having to be downloaded?
Even high-level guidelines would be nice!

A HF/Docker/Modal reproducible training/inference example

Considering that it depends on a specific torch version (torch==2.2.1) and possibly CUDA, many MacBooks won't be able to run some of the examples. If you want to run the tests and notebooks, you'll need Git LFS and so on, so it becomes an infrastructure nightmare.

  1. Is there any plan to create a template for training/inference on Docker / Modal.com, using, say, pytorch/pytorch:2.2.1-cuda12.1-cudnn8-devel?
  2. Is there any plan to create a HuggingFace space for at least one of the 10+ demos?
  3. I see that pip install with mlx support already requires huggingface_hub. Is there a reason why?

CoreNet does not detect the M2 GPU

I ran the "train_a_new_model_on_a_new_dataset_from_scratch" notebook on my M2 and no GPUs were detected; the script falls back to using CPUs. Is it possible to configure CoreNet to detect the M2 GPU?
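
As a quick diagnostic (plain PyTorch, not a CoreNet feature), you can check whether your PyTorch install can see the Apple-silicon GPU at all through the MPS backend; whether CoreNet's training pipeline then actually uses MPS is a separate question.

    import torch

    # PyTorch exposes Apple-silicon GPUs through the MPS backend, not CUDA,
    # so a CUDA-only GPU check will always report "no GPUs" on an M1/M2/M3.
    print("MPS built:    ", torch.backends.mps.is_built())
    print("MPS available:", torch.backends.mps.is_available())

    if torch.backends.mps.is_available():
        x = torch.ones(3, device="mps")  # this tensor lives on the M2 GPU
        print(x.device)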

torchtext version issue

@team,

My Mac specification: Mac M3 Max, 128 GB RAM, and 4 TB storage.

I cloned the repo, and when trying to install it, the below error occurs:

ERROR: Could not find a version that satisfies the requirement torchtext==0.17.1 (from corenet) (from versions: 0.1.1, 0.2.0, 0.2.1, 0.2.3, 0.3.1, 0.4.0, 0.5.0, 0.6.0, 0.16.2, 0.17.2, 0.18.0)
ERROR: No matching distribution found for torchtext==0.17.1

I am able to get torchtext 0.18.0, but again, it does not work.

Why use an interpreted language?

Why use an interpreted language? An interpreted language is slower and less powerful. You should use a compiled language such as C++.

For example, I have worked on [this project] in C++.

You should definitely stop using an interpreted language.

Instruct Template

Hi there, looking at the OpenELM instruct template, what I understood is that the template is something like this:

<|system|>
You are a helpful AI assistant.
<|user|>
How to explain Internet for a medieval knight?
<|assistant|>

Is that right? With MLX the model is not working properly in instruct mode.
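
To make the assumed format concrete, here is a small helper that assembles a prompt using exactly the tags shown above. The tags and their ordering are taken from this issue, not from any official OpenELM documentation, so treat it as a guess to be verified.

    # Builds a prompt in the instruct format assumed above. The special tags
    # come from this issue and may not match the template the model was
    # actually fine-tuned with.
    def build_prompt(system: str, user: str) -> str:
        return (
            f"<|system|>\n{system}\n"
            f"<|user|>\n{user}\n"
            f"<|assistant|>\n"
        )

    prompt = build_prompt(
        "You are a helpful AI assistant.",
        "How to explain Internet for a medieval knight?",
    )
    print(prompt)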

Do OpenELM's training datasets contain copyrighted material?

I'm very excited about the release of this model and the efforts the team went through to openly document seemingly every aspect of it. Thank you!

I wonder if any information can be given concerning the selection of training datasets. On https://machinelearning.apple.com/research/openelm it says:

our release includes the complete framework for training and evaluation of the language model on publicly available datasets

More specifically, on https://github.com/apple/corenet/blob/main/projects/openelm/README-pretraining.md it says:

OpenELM was pretrained on public datasets. Specifically, our pre-training dataset contains RefinedWeb, PILE, a subset of RedPajama, and a subset of Dolma v1.6.

Digging into RefinedWeb on https://huggingface.co/datasets/tiiuae/falcon-refinedweb/viewer/default/train?q=nytimes.com, it contains content from sources like nytimes.com and cnn.com.

This is not surprising: Because of the vast amounts of data needed to train LLMs (basically a snapshot of the internet), all training datasets will contain copyrighted material. LLMs are sort of a snapshot of humanity’s knowledge. References to copyrighted characters like Superman, Captain Kirk, Donald Duck, Bugs Bunny etc etc are part of that collective knowledge and references to them might pop up just about anywhere in a dataset. Getting a snapshot of humanity’s knowledge that is free of such references would be as impossible as removing the sugar from a cake after it has been baked.

So while the project only mentions "publicly available datasets" and never makes any claims to be "free of copyrighted material", can any information be shared about the selection process that went into choosing the datasets that were used to train OpenELM?

Request for Access to OpenELM Training Logs

Hi CoreNet team,

Thank you for your fantastic work on CoreNet and the OpenELM model. I've been trying to understand ML infrastructure reliability, and I'm particularly interested in analyzing the training logs that were mentioned in your OpenELM paper.

However, I've searched online and couldn't find these logs available anywhere. Could you please guide me on how to access these logs, or consider making them available if possible?

Access to these logs would be incredibly beneficial for my research, and I believe it could also be valuable for the community interested in understanding the nuances of ML model training at scale.

Thank you for your time and consideration.

CatLIP checkpoints on the hub 🤗

Hey hey! - I'm VB, I work on the open source team at Hugging Face. Massive congratulations on the OpenELM release, it's quite refreshing to see such a brilliant open release from Apple.

I was going through the trained checkpoints and wasn't able to find the CatLIP checkpoints. It'd be nice if you could upload them to Hugging Face, similar to the OpenELM checkpoints.

Let me know if you need a hand with that.

Cheers!
VB
