
Comments (11)

coyotte508 commented on June 5, 2024

Hi @hpssjellis , here are new tasks added, by @vvmnnnkv :

  • visualQuestionAnswering
  • documentQuestionAnswering
  • textToSpeech
  • tabularRegression
  • tabularClassification
  • zeroShotImageClassification
  • audioToAudio

There are also some examples here: https://huggingface.co/huggingfacejs

from huggingface.js.

hpssjellis commented on June 5, 2024

Any chance of showing how to access http://huggingface.co/chat using just form input, perhaps a user token ID, and putting the output onto the webpage? I have a live demo here that, as of July 27th, does nothing.

https://hpssjellis.github.io/my-examples-of-huggingfacejs/public/chat/hugchat00.html

Some code like:

<script>
async function huggingchat(mySentPrompt, mySentID) {
  document.getElementById('myOutput').innerHTML = await myHug(mySentPrompt, mySentID);  // HOW TO SEND THIS DATA AND GET A REPLY
}
</script>

<h6> version 0.1.2-5</h6>
Prompt: <input type="text" value="Draw an ASCII dog" id="myPrompt"> <br>
Token ID (optional): <input type="password" value="" id="myTokenID"> <br>

Output: <span id="myOutput">...</span> <br>

<input type="button" value="Huggingface Chat" onclick="{
   let myPrompt = document.getElementById('myPrompt').value;   // these are strings, so no parseInt
   let myID = document.getElementById('myTokenID').value;
   huggingchat(myPrompt, myID);
}"> <br>

coyotte508 commented on June 5, 2024

Hugging Chat is open source already, and https://huggingface.co/spaces/huggingfacejs/streaming-text-generation shows how to do streaming text generation.

For chatting models, you can use streaming text generation with the correct model & preprompt, and you can find those in Hugging Chat's source code.

hpssjellis commented on June 5, 2024

Awesome @coyotte508, thank you so much; that helped me make a static webpage that works and is easy to understand. One last step: it looks like meta-llama/Llama-2-70b-chat-hf is gated behind an access request, not really what I was expecting. As of today's update, it looks like the Meta LLM is not even available.

What is the best LLM chat model that Hugging Face supports for token or non-token users?

My simplified demo is here: https://hpssjellis.github.io/my-examples-of-huggingfacejs/public/chat/hugchat00.html

Here is my latest code:

<!-- polyfill for firefox + import maps   <script src="https://unpkg.com/es-module-shims/dist/es-module-shims.js"></script> -->
<script type="importmap">
   {"imports": {"@huggingface/inference": "https://cdn.jsdelivr.net/npm/@huggingface/inference/+esm"}}
</script>


<h6> version 0.2.2-12</h6>
Running model: <span id="myModel">...</span> <br>
Prompt: <input type="text" value="What is a large language model" id="myPrompt"> <br>
Token ID (optional): <input type="password" value="" id="myTokenID"> <br>

Output: <span id="myOutput">...</span> <br>

<input type="button" value="Huggingface Chat" onclick="{  
   launch()
}"> <br>


<script type="module">
import { HfInference } from "@huggingface/inference";
let running = false;
async function launch() {
	if (running) {
		return;
	}
	running = true;
	try {
		const hf = new HfInference(
			document.getElementById("myTokenID").value.trim() || undefined
		);
		const model = "google/flan-t5-xxl";  //document.getElementById("model").value.trim();  // set this
		document.getElementById("myModel").innerHTML = `<b>${model}</b>`;
		
		const prompt = document.getElementById("myPrompt").value.trim();
		document.getElementById("myOutput").innerHTML = "";
		for await (const output of hf.textGenerationStream({
			model,
			inputs: prompt,
			parameters: { max_new_tokens: 250 }
		}, {
			use_cache: false
		})) {
			document.getElementById("myOutput").innerHTML += output.token.text;
		}
	} catch (err) {
		alert("Error: " + err.message);
	} finally {
		running = false;
	}
}
window.launch = launch;
</script>

hpssjellis commented on June 5, 2024

It looks like a person needs to sign this Meta permission form to download the Llama v2 models:

https://ai.meta.com/resources/models-and-libraries/llama-downloads/

Explained here: https://huggingface.co/meta-llama

I don't want to download it; I just want simple JavaScript access to it. Back to the latest question: in your opinion, @coyotte508 or @radames, what is the best Hugging Face chat model presently available for open-source JavaScript testing?

coyotte508 commented on June 5, 2024

If you get access, you also get to use it with the Inference API.

Otherwise, you can try https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5. Make sure to give it a high max_new_tokens param so it can generate longer responses.

hpssjellis commented on June 5, 2024

This page shows some more info

https://medium.com/@TitanML/the-easiest-way-to-fine-tune-and-inference-llama-2-0-8d8900a57d57

hpssjellis commented on June 5, 2024

> If you get access you also get to use it with the inference API.
>
> You can try https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 otherwise, make sure to give it a high max_new_tokens param so it can generate longer responses.

@coyotte508 that is very useful. Now I am curious about formatting the generated text before putting it on the innerHTML. I found this: "Two special tokens are used to mark the beginning of user and assistant turns: <|prompter|> and <|assistant|>. Each turn ends with a <|endoftext|> token." But it doesn't really tell me how to handle line breaks, code blocks, and other formatting in the generated text. I will mess around with JSON.stringify and regex, but any suggestions?
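As a starting point, the special tokens quoted above can be used to split raw model output into turns before any further formatting. Here is a minimal sketch; the `splitTurns` helper is hypothetical (not part of huggingface.js), and it only assumes the `<|prompter|>`, `<|assistant|>`, and `<|endoftext|>` markers described in the model card:

```javascript
// Sketch: split OpenAssistant-style output into role-tagged turns.
// Assumes the special tokens quoted from the model card above.
function splitTurns(text) {
  const turns = [];
  // Match a role marker, then lazily capture everything up to the next
  // marker, an <|endoftext|> token, or the end of the string.
  const re = /<\|(prompter|assistant)\|>([\s\S]*?)(?=<\|prompter\|>|<\|assistant\|>|<\|endoftext\|>|$)/g;
  let m;
  while ((m = re.exec(text)) !== null) {
    turns.push({ role: m[1], text: m[2].trim() });
  }
  return turns;
}

const raw = "<|prompter|>What is a LLM?<|endoftext|><|assistant|>A large language model.<|endoftext|>";
console.log(splitTurns(raw)); // two turns: one "prompter", one "assistant"
```

Each assistant turn can then be formatted separately (e.g. rendered as markdown) before being appended to the page.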

Your "max_new_tokens" suggestion is very useful. Is there a list of the other default parameters?

coyotte508 commented on June 5, 2024

You have all the parameters here: https://github.com/huggingface/chat-ui/blob/54e8a52e285c9f3997d9d7550420da821cdde219/.env#L24

The model will mostly generate markdown; you can parse the text between the special tokens as markdown (e.g. using marked or another npm library).
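In a real page you would hand each chunk to an actual markdown parser such as marked, as suggested above. As a dependency-free illustration of the idea, here is a toy renderer (the `renderChunk` helper is hypothetical) that only handles fenced code blocks and line breaks:

```javascript
// Toy sketch of rendering markdown-ish model output as HTML.
// A real page should use a proper markdown library (e.g. marked);
// this version only handles ``` fences and line breaks.
function renderChunk(md) {
  const escape = (s) => s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  // Split on triple-backtick fences: odd segments are code, even segments are prose.
  return md.split("```").map((part, i) =>
    i % 2 === 1
      ? `<pre><code>${escape(part.trim())}</code></pre>`
      : escape(part).replace(/\n/g, "<br>")
  ).join("");
}

console.log(renderChunk("Here is a dog:\n```\n/\\_/\\\n```"));
```

Escaping the text before inserting it into innerHTML also guards against the model emitting raw HTML.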

hpssjellis commented on June 5, 2024

> https://github.com/huggingface/chat-ui/blob/54e8a52e285c9f3997d9d7550420da821cdde219/.env#L24 you have all the parameters there.
>
> The model will mostly generate markdown, you can parse the text between the special tokens as markdown. (eg using marked or another NPM library)

NICE! Both the markdown tip and the link to the parameters https://github.com/huggingface/chat-ui/blob/54e8a52e285c9f3997d9d7550420da821cdde219/.env#L46-L52 are great.

Can someone check my explanations of these parameters? I will have to do some trial and error. Does anyone know the direction and maximums of these values?

    "parameters": {
      "temperature": 0.9,          // randomness of sampling: higher = more random, lower = more deterministic
      "top_p": 0.95,               // nucleus sampling: keep the smallest set of tokens whose cumulative probability reaches top_p
      "repetition_penalty": 1.2,   // penalizes tokens that have already appeared (values > 1 discourage repetition)
      "top_k": 50,                 // sample only from the k most likely next tokens
      "truncate": 1000,            // truncate the input prompt to at most this many tokens
      "max_new_tokens": 1024       // maximum number of new tokens generated in the output
    }

I should be able to do a ton with this information.
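To get a feel for the direction of the top_k and top_p knobs, here is a toy sketch (not Hugging Face's actual sampler; the `filterCandidates` helper is hypothetical) of how they shrink the candidate set before one next token is sampled:

```javascript
// Toy illustration of top_k / top_p filtering on a next-token distribution.
// Not the real server implementation, just the general idea.
function filterCandidates(probs, topK, topP) {
  // probs: { token: probability }, assumed to sum to 1
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    if (kept.length >= topK) break;   // top_k: keep at most k tokens
    if (cumulative >= topP) break;    // top_p: stop once cumulative probability reaches top_p
    kept.push(token);
    cumulative += p;
  }
  return kept;
}

const probs = { the: 0.5, a: 0.3, dog: 0.15, zebra: 0.05 };
console.log(filterCandidates(probs, 3, 0.9));  // → ["the", "a", "dog"]
console.log(filterCandidates(probs, 3, 0.6));  // → ["the", "a"]
```

So raising top_k or top_p widens the pool of candidate tokens (more variety), while lowering them narrows it; top_p goes from 0 to 1, and top_k is a positive integer.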

hpssjellis commented on June 5, 2024

Thanks for your help. I will close this issue until I find some time to work on more examples. The Hugging Chat demo works fine at https://hpssjellis.github.io/my-examples-of-huggingfacejs/public/chat/hugchat00.html. I am just trying to figure out how to explain the parameters.
