grammy's People

Contributors

abdoo9, ak4zh, allcontributors[bot], dcdunkan, dzek69, edjopato, fwqaaq, glukki, grammyz, knightniwrem, knorpelsenf, kospra, kraftwerk28, loskir, mrvsik, niusia-ua, oott123, outloudvi, ponomarevlad, rojvv, roziscoding, satont, shaunnope, shevernitskiy, ssistoza, tonytkachenko, vitorleo, waptik, winstxnhdw, wojpawlik


grammy's Issues

How to prevent the bot from crashing in case of an error? It looks like bot.catch() doesn't work for me

My code looks like this:

bot.catch((err) => {
	const ctx = err.ctx;
	console.error(`Error while handling update ${ctx.update.update_id}:`);
	const e = err.error;
	console.log( `Some error was catch: `, e );
});
run( bot );

// ... in 5 seconds after bot has started:
try {
	bot.api.sendMessage( -10000000000, `some text` );
} catch ( e ) {
	console.log( `Some error was catch: `, e );
}

After starting the bot, I try to send a message to a chat my bot was removed from:

/home/user/node_modules/grammy/out/core/client.js:110
            throw new error_js_1.GrammyError(`Call to '${method}' failed!`, data, method, payload);
                  ^

GrammyError: Call to 'sendMessage' failed! (403: Forbidden: bot was kicked from the supergroup chat)
    at ApiClient.callApi (/home/user/node_modules/grammy/out/core/client.js:110:19)
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  method: 'sendMessage',
  payload: { chat_id: -10000000000, text: 'some text' },
  ok: false,
  error_code: 403,
  description: 'Forbidden: bot was kicked from the supergroup chat',
  parameters: {}
}

Node.js v17.0.1

I expected that the bot wouldn't crash on this kind of error.
Is there a way to prevent the bot from crashing when I try to send a message to a chat where I'm not allowed to send one?
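
A likely cause (an assumption, since only this snippet is shown): `bot.api.sendMessage` returns a promise, and a `try`/`catch` without `await` cannot catch an asynchronous rejection, so the rejection crashes the process. A minimal self-contained sketch with a stand-in `sendMessage`:

```typescript
// Stand-in for bot.api.sendMessage: an async call that rejects, like the 403 above.
async function sendMessage(): Promise<void> {
  throw new Error("403: Forbidden: bot was kicked from the supergroup chat");
}

async function main(): Promise<string> {
  try {
    // Without `await`, the rejection escapes this block and becomes an
    // unhandled rejection. With `await`, it surfaces inside the try block.
    await sendMessage();
  } catch (e) {
    return `caught: ${(e as Error).message}`;
  }
  return "not caught";
}
```

Note also that `bot.catch` only covers errors thrown while handling updates in middleware; manual API calls made outside of middleware need their own error handling.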

Help request: guidance for migrating from Telegraf 4 BaseScene

Hi all.
I want to say thanks to all grammY developers for this superb and excellent documented library!

I'm currently using Telegraf 4's BaseScene in the usual way: ctx.scene.enter, ctx.scene.leave, inline keyboard callback/action scenes, and so on.

So, what is the best way to migrate this scenes logic to grammY?

Regards!

Help request: Nest decorators

I'm in the process of porting nest-telegraf to grammY. I like where this is going, but I'm struggling with a specific piece and could use some help (or guidance, if this is a lost cause):

I'm struggling with their createListenerDecorator. Here's how they use it:

export const On = createListenerDecorator('on');
// or
export const Hears = createListenerDecorator('hears');
// or 
export const Command = createListenerDecorator('command');

I'd love some assistance in getting this sorted out, so I could use it properly in my code. The goal is to be able to do:

  @Start()
  onStart(): string {
    return 'Say hello to me';
  }

  @Hears(['hi', 'hello', 'hey', 'qq'])
  onGreetings(
    @UpdateType() updateType: TelegrafUpdateType,
    @Sender('first_name') firstName: string,
  ): string {
    return `Hey ${firstName}`;
  }
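
One possible shape for such a factory (a sketch under assumptions; the metadata registry and all names here are mine, not nest-telegraf's): record the event and triggers for each decorated method, so a bootstrapper can later wire them up with `composer.on(...)`, `composer.hears(...)`, etc. The decorator is applied manually below so the sketch does not depend on the `experimentalDecorators` flag:

```typescript
// A registry that a Nest-style bootstrapper could read to attach handlers.
interface ListenerMetadata {
  event: string;
  triggers: unknown[];
  methodKey: string;
}
const listeners: ListenerMetadata[] = [];

// createListenerDecorator("hears") returns a decorator factory that records
// which method should handle which event, without touching the method itself.
function createListenerDecorator(event: string) {
  return (...triggers: unknown[]) =>
    (_target: object, methodKey: string) => {
      listeners.push({ event, triggers, methodKey });
    };
}

const Hears = createListenerDecorator("hears");

class GreetingsUpdate {
  onGreetings(firstName: string): string {
    return `Hey ${firstName}`;
  }
}

// Equivalent to writing @Hears(["hi", "hello"]) above onGreetings:
Hears(["hi", "hello"])(GreetingsUpdate.prototype, "onGreetings");
```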

Weird reaction to an InputFile with a path that does not exist

When I pass a nonexistent path to InputFile, grammY apparently still tries to read it. This simple code:

    bot.on("message", (ctx) => {
        console.log("start", new Date().toISOString());
        ctx.replyWithVideo(new InputFile("/tmp/this.does.not.exist")).catch(e => {
            console.error(e);
            console.error("end", new Date().toISOString());
        });
    });

Results in:

node             | start 2021-09-25T10:27:57.567Z
node             | HttpError: Network request for 'sendVideo' failed!
node             |     at ApiClient.call (/home/node/app/node_modules/grammy/out/core/client.js:55:27)
node             |     at processTicksAndRejections (node:internal/process/task_queues:96:5)
node             |     at ApiClient.callApi (/home/node/app/node_modules/grammy/out/core/client.js:68:22) {
node             |   error: FetchError: request to https://api.telegram.org/xxxxxxxxxxxxxxxxxxxx/sendVideo failed, reason: socket hang up
node             |       at ClientRequest.<anonymous> (/home/node/app/node_modules/node-fetch/lib/index.js:1461:11)
node             |       at ClientRequest.emit (node:events:402:35)
node             |       at ClientRequest.emit (node:domain:475:12)
node             |       at TLSSocket.socketOnEnd (node:_http_client:471:9)
node             |       at TLSSocket.emit (node:events:402:35)
node             |       at TLSSocket.emit (node:domain:475:12)
node             |       at endReadableNT (node:internal/streams/readable:1343:12)
node             |       at processTicksAndRejections (node:internal/process/task_queues:83:21) {
node             |     type: 'system',
node             |     errno: 'ECONNRESET',
node             |     code: 'ECONNRESET'
node             |   }
node             | }
node             | end 2021-09-25T10:28:57.618Z
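
Until this is handled inside grammY, a workaround (a sketch; the helper name is mine) is to verify the path before constructing the InputFile, so a missing file fails immediately with a clear local error instead of an opaque ECONNRESET a minute later:

```typescript
import { promises as fs } from "fs";

// Fails fast with ENOENT if the file is missing, instead of letting the
// upload fail later with a confusing network error.
async function existingPath(path: string): Promise<string> {
  await fs.access(path); // rejects if the file does not exist or is unreadable
  return path;
}
```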

[Bug?] Webhook reply does not work without explicitly setting `Content-Type: application/json`

Just following grammY's guide resulted in no responses when using webhook replies.

I traced the problem down to grammY handing back a JSON string without setting the header. To work around this, I had to add a middleware that sets the Content-Type:

app.use(async (ctx, next) => {
  ctx.set("Content-Type", "application/json");
  await next();
});

// Make bot callable via webhooks
app.use(webhookCallback(bot, "koa"));

This problem occurred with both Express and Koa.

Trying to run the simple example, but it fails

My bot1.ts:

import { Bot } from "grammy";
const bot = new Bot( "token" );
bot.command( "start", ( ctx ) => ctx.reply( "Welcome! Up and running." ) );
bot.on( "message", ( ctx ) => ctx.reply( "Got another message!" ) );
bot.start();

Then:

» npx tsc

» node bot1.js
/home/user/nodejs/node_modules/grammy/out/core/client.js:110
            throw new error_js_1.GrammyError(`Call to '${method}' failed!`, data, method, payload);
                  ^

GrammyError: Call to 'getMe' failed! (404: Not Found)
    at ApiClient.callApi (/home/user/nodejs/node_modules/grammy/out/core/client.js:110:19)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Bot.init (/home/user/nodejs/node_modules/grammy/out/bot.js:167:24)
    at async Bot.start (/home/user/nodejs/node_modules/grammy/out/bot.js:251:9) {
  method: 'getMe',
  payload: undefined,
  ok: false,
  error_code: 404,
  description: 'Not Found',
  parameters: {}
}

Node.js v17.0.1

I installed grammy via: npm i grammy.
What have I missed?

Can't compile simple example

bot1.ts:

import { Bot } from "grammy";
» tsc bot1.ts
../../node_modules/@types/express-serve-static-core/index.d.ts:584:18 - error TS2430: Interface 'Response<ResBody, Locals, StatusCode>' incorrectly extends interface 'ServerResponse'.
  Property 'req' is optional in type 'Response<ResBody, Locals, StatusCode>' but required in type 'ServerResponse'.
584 export interface Response<
                     ~~~~~~~~

../../node_modules/@types/express/index.d.ts:58:55 - error TS2344: Type 'Response<any, Record<string, any>>' does not satisfy the constraint 'ServerResponse'.
  Property 'req' is optional in type 'Response<any, Record<string, any>>' but required in type 'ServerResponse'.
58     var static: serveStatic.RequestHandlerConstructor<Response>;
                                                         ~~~~~~~~

../../node_modules/grammy/out/core/client.d.ts:1:23 - error TS2688: Cannot find type definition file for 'node-fetch'.
1 /// <reference types="node-fetch" />
                        ~~~~~~~~~~
Found 3 errors.
» tsc -v     
Version 4.4.4

» node -v
v17.0.1

What's wrong with my config?
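
Two of the three errors come from unrelated `@types` packages being type-checked. A common workaround (an assumption, since your tsconfig isn't shown) is to enable `skipLibCheck` so dependency declaration files are not re-checked; the third error may also go away after `npm i -D @types/node-fetch`, since grammY's declaration files reference the node-fetch types:

```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```

With a tsconfig.json in place, compile with plain `tsc` rather than `tsc bot1.ts`, because per-file invocations ignore the config file.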

bug: Cross-level twin typing

The filter queries have a feature called twin properties, which guarantees some properties whenever certain others are present. For example, animation messages always have the document property specified, and the filter queries account for that.

So far, this only works on L2. However, there are fields such as from (only absent in channel post messages) or edit_date (only absent for new messages) which require inspecting L1 in order to make an assumption about L2 properties. In other words, we cannot simply assume that the twin properties can be read from a flat list. Instead, we need to build up nested object structures that can be merged (intersected) into the result type in order to support twins that have dependencies across several levels.

Is there a way to send message to only one chat?

I've added my bot to some public chats (type: supergroup). How do I send a message to just one specific chat?
Do I need to store all contexts and call ctx.reply() for each of them?
How do I store only valid contexts?
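
You don't need stored contexts: `bot.api.sendMessage` addresses a single chat by its numeric id. Sketched below with a stand-in `api` object so the snippet is self-contained; with grammY itself it would be `bot.api.sendMessage(chatId, text)`:

```typescript
// Stand-in for grammY's bot.api: sendMessage targets exactly one chat id.
interface Api {
  sendMessage(chatId: number, text: string): Promise<string>;
}
const api: Api = {
  sendMessage: async (chatId, text) => `sent "${text}" to ${chatId}`,
};

async function notifyOneChat(api: Api, chatId: number): Promise<string> {
  return api.sendMessage(chatId, "Hello, just this chat!");
}
```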

feat: Multiple sessions

It would be nice to be able to connect multiple sessions at the same time. The problem with static typing that prevented this from being implemented can be solved by requiring an exhaustive list of properties upon session plugin instantiation. This will also lead to code with higher concurrency.

Session key is undefined on payment.

Hello. It looks as if the session key cannot be resolved for payment updates.
This happens after I click "pay 120 rub" in the dialog box from Telegram, after filling in my card details on a third-party service (UKassa).


Middlewares:

session:

export const middleware = () =>
  session({
    initial: () => ({
      user: {
        isRegistered: false,
      },
    }),
    storage,
  });

i18n:

export const middleware = () =>
  useFluent({
    fluent,
    localeNegotiator: (ctx: Context) =>
      ctx.session.user.languageCode || ctx?.from?.language_code,
  } as GrammyFluentOptions & {
    localeNegotiator: LocaleNegotiator<Context>;
  });

Also composer code, if it will be helpful:

composer.command('test', async ctx => {
  await ctx.replyWithInvoice(
    ctx.t('subscription_fill_up_balance_title', { count }),
    ctx.t('subscription_fill_up_balance_title', { count }),
    JSON.stringify({
      unique_id: `${ctx.from!.id}_${Number(new Date())}`,
      provider_token: config.BOT_PAYMENT_PROVIDER_TOKEN,
    }),
    config.BOT_PAYMENT_PROVIDER_TOKEN,
    'RUB',
    [{ label: count.toString(), amount: count * 100 }]
  );
});

composer.on('pre_checkout_query', (ctx) => ctx.answerPreCheckoutQuery(true));
composer.on(':successful_payment', async (ctx, next) => {
  await ctx.reply('SuccessfulPayment');
});

Stack:

Error while handling update 479364603:
Unknown error: Error: Cannot access session data because the session key was undefined!
    at Context.get (/home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/convenience/session.js:67:27)
    at localeNegotiator (/home/satont/Projects/funpay/apps/bot/src/middlewares/setup-i18n.middleware.ts:14:11)
    at negotiateLocale (/home/satont/Projects/funpay/apps/bot/node_modules/@moebius/grammy-fluent/src/middleware.ts:95:15)
    at fluentMiddleware (/home/satont/Projects/funpay/apps/bot/node_modules/@moebius/grammy-fluent/src/middleware.ts:81:11)
    at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:61:41
    at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:71:19
    at /home/satont/Projects/funpay/apps/bot/src/middlewares/setup-logger.middleware.ts:16:10
    at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:61:41
    at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:71:19
    at AsyncLocalStorage.run (node:async_hooks:320:14)

Node: 16
Grammy: 1.6.2
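
A plausible explanation (an assumption based on the stack trace): grammY derives the default session key from the chat, and `pre_checkout_query` updates carry no chat, so the key resolves to `undefined` and the i18n middleware's `ctx.session` access throws. Modeled below with stand-in types; in grammY itself, the `session()` options accept a custom `getSessionKey` function, e.g. one based on `ctx.from?.id`:

```typescript
// Stand-in context shape: pre_checkout_query updates have `from` but no `chat`.
interface Ctx {
  chat?: { id: number };
  from?: { id: number };
}

// Chat-based key (the default strategy) vs. a user-based key.
const chatKey = (ctx: Ctx): string | undefined => ctx.chat?.id.toString();
const userKey = (ctx: Ctx): string | undefined => ctx.from?.id.toString();

const preCheckout: Ctx = { from: { id: 42 } }; // no chat on this update type
```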

feat: Enumerate all sessions

I'm thinking about allowing

for await (const data of ctx.session) {
  // handle session data of user
}

which would effectively allow you to enumerate all sessions.

This would be optional to implement for storage adapters. Hence, it is a non-breaking change. Storage adapters that do not support this will simply throw an error when this is attempted.
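
A sketch of how that could look for adapters (the interface and error message here are assumptions, not grammY API): adapters opt in by exposing an async iterable, and those that don't simply throw when enumeration is attempted:

```typescript
// Optional capability on a storage adapter.
interface Enumerable<S> {
  entries?(): AsyncIterable<S>;
}

async function* enumerateSessions<S>(adapter: Enumerable<S>) {
  if (!adapter.entries) {
    throw new Error("This storage adapter does not support enumerating sessions");
  }
  yield* adapter.entries();
}

// A toy in-memory adapter that supports enumeration:
const memory: Enumerable<number> = {
  async *entries() {
    yield 1;
    yield 2;
  },
};
```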

Originally posted by @KnorpelSenf in #130 (comment)

Shortcut for adding a middleware subtree

Let's say I want to build a middleware tree:

∟ filter private chat
  ∟ filter text
  ∟ filter sticker
∟ filter group chat
  ∟ filter text
  ∟ filter sticker

If I understand correctly, this is how one achieves that:

const privateChatComposer = new Composer<BotContext>()
privateChatComposer.on("message:text", ...)
privateChatComposer.on("message:sticker", ...)

const groupChatComposer = new Composer<BotContext>()
groupChatComposer.on("message:text", ...)
groupChatComposer.on("message:sticker", ...)

bot.filter((ctx) => ctx.chat?.type === "private", privateChatComposer)
bot.filter((ctx) => ctx.chat?.type === "group", groupChatComposer)

I propose creating a helper such as Composer.create (or perhaps a global compose function) that makes the code above more concise:

bot.filter((ctx) => ctx.chat?.type === "private", Composer.create(bot => {
  bot.on("message:text", ...)
  bot.on("message:sticker", ...)
}))
bot.filter((ctx) => ctx.chat?.type === "group", Composer.create(bot => {
  bot.on("message:text", ...)
  bot.on("message:sticker", ...)
}))

implemented similarly to:

import { Composer, Context } from "grammy"

// Sample implementation as a global function.
// Could as well be a static method on Composer.
export function compose<C extends Context>(setup: (composer: Composer<C>) => void) {
  const composer = new Composer<C>()
  setup(composer)
  return composer.middleware()
}

UPDATE: based on the discussion below and prototyping, my current suggestion is:

bot.filter((ctx) => ctx.chat?.type === "private").setup(privateChat => {
  privateChat.on("message:text", ...)
  privateChat.on("message:sticker", ...)
})

Implemented like this:

export class Composer<C extends Context> {
  /** Run the provided setup function against the current composer. */
  setup(setup: (composer: this) => void) {
    setup(this)
    return this
  }
}

See monkey patch: https://github.com/IlyaSemenov/grammy-scenes/blob/87b5fd0b9153436bcf592bd47ff0e4e9bc8c208d/src/monkey_patches/composer_setup.ts

fix: Sending messages after chat join requests

https://telegram.org/blog/protected-content-delete-by-date-and-more introduced the ability for bots to proactively send a message to users when they request to become a member of a chat.

Currently, grammY does not support getting the user identifier via ctx.from.id in that case. It is also not possible to send messages to the group upon incoming join requests. It would be neat if we could simply reply to chat join requests and have this send a message to the respective user.

Suggestion:

// Define admission test
const menu = new Menu('life-universe-everything')
menu.text('13', ctx => ctx.declineChatJoinRequest(ctx.from.id))
menu.text('42', ctx => ctx.approveChatJoinRequest(ctx.from.id))
menu.text('1337', ctx => ctx.declineChatJoinRequest(ctx.from.id))
bot.use(menu)

// Send admission tests, and notify group
bot.on('chat_join_request', async ctx => {
  await ctx.api.sendMessage(ctx.from.id, 'Hi! What is the answer?', { reply_markup: menu })
  await ctx.reply(`I contacted ${ctx.from.username ?? ctx.from.first_name} to see if they may join!`)
})

How does api.sendDocument work?

Hi, with telegraf, it was possible to change the name of the file with:
bot.telegram.sendDocument(chat_id, { source: `name.json`, filename: `other_name.json`}, [{ disable_notification: true }] );
How do I do this with grammY? I only found this:
bot.api.sendDocument(chat_id, new InputFile('name.json'))
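
If I read grammY's API correctly (worth verifying against the current reference), `InputFile` accepts an optional filename as its second constructor argument, i.e. `new InputFile('name.json', 'other_name.json')`. The shape, modeled with a stand-in class so this snippet is self-contained:

```typescript
// Stand-in mirroring InputFile's (file, filename?) constructor shape.
class InputFileLike {
  constructor(
    public readonly file: string,
    public readonly filename?: string, // name shown to the recipient
  ) {}
}

const doc = new InputFileLike("name.json", "other_name.json");
```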

Compatibility with Deno deploy

Hi, cool project!

I tried using grammY in a Deno Deploy function and got the error
TypeError: cannot read property 'query' of undefined in this line.

The line is let res = await Deno.permissions.query(env), and as I understand it, the problem is that Deno Deploy does not expose the Deno.permissions API.
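
A defensive probe could handle this (a sketch; the optimistic fallback policy is my assumption): wrap the query so platforms that do not expose `Deno.permissions`, like Deno Deploy, fall back to assuming access:

```typescript
// Returns whether env access is granted, assuming yes on platforms
// that do not expose the Deno.permissions API (e.g. Deno Deploy).
async function envPermissionGranted(): Promise<boolean> {
  try {
    const res = await (globalThis as any).Deno.permissions.query({ name: "env" });
    return res.state === "granted";
  } catch {
    // Deno.permissions is unavailable: assume the platform grants access.
    return true;
  }
}
```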

I'm not sure if this is worth, or even possible, to do anything about, but it would be very nice to be able to use this framework to deploy Telegram bots on Deno Deploy.

Thank you for the framework.

Is there a way to handle live video feeds?

Is there a way to handle live video feeds?

I see there is a method for handling video files, but I would like to know if there is a way to handle live video feeds. My use case: I'm trying to identify objects in live video feeds.

Grammy.dev 502 API reference

First of all thank you a lot for this tool!

There's a problem with the deployed website, which you might already know about.

If you go to grammy.dev and click API Reference - the link to https://doc.deno.land/https://deno.land/x/grammy/mod.ts
will be broken and show you 502 - DEPLOYMENT_FAILED.

I'm not sure if it's on your end or Deno Deploy's. Still, it would be nice to look into.

Thank you!

feat: Filtering for sender type

There are five different possible types of message authors.

  • users
  • channel post authors
  • admins sending on behalf of the group (anonymous admins)
  • automatic forwards from channels
  • users sending as one of their channels

In each case, the combination of the values ctx.msg.chat.id, ctx.msg.is_automatic_forward, and ctx.msg.sender_chat.id is different.

It could be cool to not only have #105 but also something like

bot.senderType('user') // all regular user messages (includes bots)
bot.senderType(['channel-post', 'anonymous-admin']) // channel admins in channel or group
// etc

The exact strings should probably be different, please suggest something.

Does anyone need this?
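
The field combinations described above could be sketched as a classifier (the category strings are placeholders, as the issue itself notes; field names follow the Bot API):

```typescript
interface MessageLike {
  chat: { id: number; type: "private" | "group" | "supergroup" | "channel" };
  from?: { id: number };
  sender_chat?: { id: number };
  is_automatic_forward?: boolean;
}

function senderType(msg: MessageLike): string {
  if (msg.chat.type === "channel") return "channel-post";
  if (msg.is_automatic_forward) return "automatic-forward";
  if (msg.sender_chat === undefined) return "user";
  // sender_chat equal to the chat itself means an anonymous group admin;
  // any other sender_chat is a user sending as one of their channels.
  return msg.sender_chat.id === msg.chat.id ? "anonymous-admin" : "sender-channel";
}
```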

Roadmap 1.10: Conversational Interfaces

This is a list of tasks that should be completed before the 1.10 release. All of them focus on improving how well the core library is suited to be used in conversations.

This list is tentative. Things may be added or removed if we figure that makes sense.

There is no ETA—it will be released once everything is done. It goes faster if you contribute.

If you want to help out but you don't know what to pick, please leave a comment here. If you want to work on some issue but you don't know how to begin, please leave a comment there. In any case, you can join the community chat and ask.

feat: InputFile from supplier functions

Passing an iterator to InputFile means that the respective API call cannot be retried. This is because the iterator may already be (partially) consumed, which would corrupt the file data.

Supplier functions could come to the rescue:

new InputFile(() => createNewIterator())

would allow users to pass a function that returns any of the values that are currently supported. This function would be invoked once per API call attempt.
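
The difference can be demonstrated with a plain generator (a self-contained sketch of the principle, not grammY code): a single iterator is empty on the second pass, while a supplier hands every attempt a fresh one:

```typescript
function* fileChunks(): Generator<Uint8Array> {
  yield new Uint8Array([1, 2, 3]);
  yield new Uint8Array([4, 5, 6]);
}

// Helper that fully consumes an iterator into an array.
function drain<T>(iter: Iterator<T>): T[] {
  const out: T[] = [];
  for (let r = iter.next(); !r.done; r = iter.next()) out.push(r.value);
  return out;
}

// One shared iterator: the retry sees no data at all.
const shared = fileChunks();
const firstAttempt = drain(shared); // consumes the iterator
const retryAttempt = drain(shared); // empty: the retried upload would be corrupt

// A supplier function: every API call attempt gets a fresh, complete stream.
const supplier = () => fileChunks();
const freshRetry = drain(supplier());
```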

Plugin Ideas

Here is a list of ideas that could be turned into plugins.

You can suggest new ideas in the comments or in the group chat, and we may add them here. Once someone publishes an official plugin at version 1.0, we will remove the idea from this list. You can also decide to start working on a third-party plugin. In that case, you should link it on https://grammy.dev and be able to demonstrate that your plugin works.

grammy-dedupe

Deduplicates updates delivered via webhooks. If users follow https://grammy.dev/guide/deployment-types.html#how-to-use-webhooks then no duplicates should appear, but advanced use cases may still require this.

  • Maybe based on Redis, but also supports storing in RAM?
  • Store lower bound of update identifiers
  • Store set of currently handled updates larger than that with their promises
  • New update: if it is lower than bound: drop, if it is in set: drop but await stored promise, else store next()'s promise behind identifier

grammy-bulk

Helps sending messages to many people, or performing other bulk operations.

  • Must survive crashes and restarts while sending
  • Must stay clear of flood limits at all times, maybe even dynamically adjusting to the bot's organic load

grammy-telegraph

Helps sending long messages.

  • Covers the Telegraph API
  • Allows sending long text messages by automatically creating an article from them, and sending the link instead

awilix-grammy

Helps creating complex code bases.

grammy-test

Helps writing tests for bots.

UI components based on web apps

grammy-telemetry

  • collects basic usage data about the bot
  • exports markers that can be installed in the middleware tree to easily track feature usage
  • supports custom telemetry events through the context object
  • integrates with other services like Prometheus or InfluxDB
  • is able to use storage adapters for persistence

grammy-util

grammy-progress

  • lets you display the progress of something inside a message
  • as a core primitive, has a transformer that allows you to edit a message at any rate, and it will debounce the calls internally upon rate limits, so that only the most recent edit is performed
  • performs edits as fast as possible and as slow as necessary
  • works on a chat_id/message_id basis
  • supports all methods with this property (editMessageText, editMessageCaption, editMessageMedia, editMessageLiveLocation, editMessageReplyMarkup)
  • provides utilities for rendering progress bars, so you can do
    const progress = ctx.progress(chat_id, message_id, "style")
    await progress.set(0.3) // 30 %

grammy-replies

  • improves support for replies
  • helps with support for the following things:
    • Which entities does the message that is being replied to have?
    • Does the message that is being replied to have a /start command?
    • Is the message that is being replied to an image?
    • Does the message that is being replied to have a caption that matches a regular expression?

grammy-fsm

  • FSM code is hard to read
  • Ideally, we want to view a nice flow chart diagram
  • Diagrams are a pain to create with a diagram editor
  • We need to solve the problem of writing code and reading diagrams
    • Create a state machine library
    • Create a VS Code extension that parses TS files that import this library
    • Take the code behind https://rtsys.informatik.uni-kiel.de/elklive/elkgraph.html
    • Put it into the extension, and provide a live preview of the FSM that updates in real time while people type their state machine
  • create a grammY plugin that can import such FSM code, and translate it into a conversational interface library

Project doesn't compile on Deno 1.20.3

How to reproduce:

deno upgrade
deno cache "https://deno.land/x/[email protected]/mod.ts"

Version:

deno --version                                                                                                                                    
deno 1.20.3 (release, x86_64-unknown-linux-gnu)
v8 10.0.139.6
typescript 4.6.2

Logs:

Check https://deno.land/x/[email protected]/mod.ts
error: TS2769 [ERROR]: No overload matches this call.
  Overload 1 of 2, '(input: string | Request, init?: RequestInit | undefined): Promise<Response>', gave the following error.
    Argument of type 'string | URL' is not assignable to parameter of type 'string | Request'.
      Type 'URL' is not assignable to type 'string | Request'.
  Overload 2 of 2, '(input: URL, init?: RequestInit | undefined): Promise<Response>', gave the following error.
    Argument of type 'string | URL' is not assignable to parameter of type 'URL'.
      Type 'string' is not assignable to type 'URL'.
    const { body } = await fetch(url);
                                 ~~~
    at https://deno.land/x/[email protected]/platform.deno.ts:134:34



TS2322 [ERROR]: Type '((root: string, token: string, method: string) => URL) | ((root: string, token: string, method: string) => string)' is not assignable to type '(root: string, token: string, method: string) => URL'.
  Type '(root: string, token: string, method: string) => string' is not assignable to type '(root: string, token: string, method: string) => URL'.
    Type 'string' is not assignable to type 'URL'.
            buildUrl: options.buildUrl ??
            ~~~~~~~~
    at https://deno.land/x/[email protected]/core/client.ts:207:13

    The expected type comes from property 'buildUrl' which is declared here on type 'Required<ApiClientOptions>'
        buildUrl?: (
        ~~~~~~~~
        at https://deno.land/x/[email protected]/core/client.ts:131:5

Found 2 errors.

Broken for TypeScript 4.5

Deno 1.17.0 has been released, and TypeScript 4.5 is embedded in it.
grammY is broken in Deno 1.17.0 by the Awaited type newly introduced in TypeScript 4.5 (ref).

Reproduction

Just run the command below:

$ deno run --reload https://deno.land/x/grammy/mod.ts

Error message:

error: TS2322 [ERROR]: Type 'S | undefined' is not assignable to type 'Awaited<S> | undefined'.
  Type 'S' is not assignable to type 'Awaited<S> | undefined'.
    Type 'S' is not assignable to type 'Awaited<S>'.
                v = options.initial?.();
                ^
    at https://deno.land/x/[email protected]/convenience/session.ts:273:17

grammY conversations (Why Scenes Are Bad)

Admittedly, that title was clickbait. It seems like you are interested in scenes or wizards, and other programming patterns that let you define conversational interfaces as if they were a finite-state machine (FSM, wiki).

THIS IS A LOT OF TEXT. Please still read everything before you comment. Let's try to keep the signal-to-noise ratio high on this one :)

What Is This Issue

One of the features that get requested most often is scenes. This issue shall:

  1. Bring everyone onto the same page about what people mean when they say scenes.
  2. Explain why there is not going to be a traditional implementation of scenes for grammY, and how we're trying to do better.
  3. Update you about the current progress.
  4. Introduce two novel competing concepts that could both turn out to be better than scenes.
  5. Serve as a forum to discuss where we want to take this library regarding grammY conversations/scenes.

1. What are scenes?

A chat is a conversational interface. This means that the chat between the user and the bot evolves over time. Old messages stay relevant when processing current ones, as they provide the context of the conversation that determines how to interpret messages.

< /start
>>> How old are you?
< 42
>>> Cool, how old is your mother?
< 70
>>> Alright, she was 28 when you were born!

Note how the user sends two messages, and both are numbers. We only know that those two numbers mean two different things because we can follow the flow of the conversation. The two age numbers are following up two different questions. Hence, in order to provide a natural conversational flow, we must store the history of the chat, and take it into account when interpreting messages.

Note that Telegram does not store the chat history for bots, so you have to store it yourself. This is often done via sessions, but you can also use your own database.

In fact, we often don't need to know the entire chat history. The few most recent messages are usually enough to remember, as we likely don't care about what the user sent back in 2018. It is therefore common to construct state, i.e. a small bit of data that stores where in the conversation we are. In our example, we would only need to store whether the last question was about the age of the user or about the age of their mother.
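
The state for the example conversation above can be sketched as a tiny state machine (illustrative only; no grammY API involved):

```typescript
type Step = "ask-user-age" | "ask-mother-age" | "done";

interface State {
  step: Step;
  userAge?: number;
}

// Interpret an incoming number based on which question was asked last.
function next(state: State, input: number): [State, string] {
  switch (state.step) {
    case "ask-user-age":
      return [
        { step: "ask-mother-age", userAge: input },
        "Cool, how old is your mother?",
      ];
    case "ask-mother-age":
      return [
        { step: "done" },
        `Alright, she was ${input - state.userAge!} when you were born!`,
      ];
    case "done":
      return [state, "We're already done!"];
  }
}
```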

Scenes are a way to express this conversational style by letting you define a finite-state machine. Please look up what that is; it is essential for the following discussion. The state is usually stored in the session data. Scenes achieve this by isolating a part of the middleware into a block that can be entered and left.

Different bot frameworks have different syntax for this, but it typically works roughly like this (explanatory code, do not try to run):

// Define a separate part of the middleware handling.
const scene = new Scene('my-scene')
scene.command('start', ctx => ctx.reply('/start command from inside scene'))
scene.command('leave', ctx => ctx.scene.leave()) // leave scene

// Define regular bot.
const bot = new Bot('secret-token')
bot.use(session())
bot.use(scene)
bot.command('start', ctx => ctx.reply('/start command outside of scene'))
bot.command('enter', ctx => ctx.scene.enter('my-scene')) // enter scene

bot.start()

This could result in the following conversation.

< /start
>>> /start command outside of scene
< /enter
< /start
>>> /start command from inside scene
< /leave
< /start
>>> /start command outside of scene

In a way, every scene defines one step of the conversation. As you can define arbitrarily many of these scenes, you can define a conversational interface by creating a new instance of Scene for every step, and hence define the message handling for it.

Scenes are a good idea. They are a huge step forward from merely defining dozens of handlers on the same middleware tree. Bots that do not use scenes (or a similar form of state management) effectively forget everything that happened in the chat immediately after they're done handling a message. (If they seem to remember their context, then this is more or less a workaround which relies on a replied-to message, inline menus, or other information in order to avoid state management.)

2. Cool! So what is the problem?

Scenes effectively reduce the flow of a conversation to being in a state, and then transitioning into another state (ctx.scene.enter('goto')). This can be illustrated by translating scenes into routers:

const scene = new Router(ctx => ctx.session.scene)

// Define a separate part of the middleware handling.
const handler = new Composer()
scene.route('my-scene', handler)
handler.lazy(ctx => {
  const c = new Composer()
  c.command('start', ctx => ctx.reply('/start command from inside scene'))
  c.command('leave', ctx => ctx.session.scene = undefined) // leave scene
  return c
})

// Define regular bot.
const bot = new Bot('secret-token')
bot.use(session())
bot.use(scene)
bot.command('start', ctx => ctx.reply('/start command outside of scene'))
bot.command('enter', ctx => ctx.session.scene = 'my-scene') // enter scene

bot.start()

Instead of creating new Scene objects, we simply create new routes, and obtain the same behaviour with minimally more code.

This may work if you have two states. It may also work for three. However, the more often you instantiate Scene, the more states you add to your global pool of states, between which you jump around arbitrarily. This quickly becomes messy. It takes you back to the old days of writing one huge file of code without indentation and using GOTO to move around. That, too, works at a small scale, but considering GOTO harmful led to a paradigm shift that substantially advanced programming as a discipline.

In Telegraf, there are some ways to mitigate the problem. For example, one could add a way to group some scenes together into a namespace: Telegraf groups several scenes like the one above under a Stage. It also lets you force certain scenes into a linear history, and calls this a wizard, in analogy to multi-step UI forms.

With grammY, we try to rethink the state of the art, and to come up with original solutions to long-standing problems. Admitting that Update objects are actually pretty complex objects led us to giving powerful tools to bot developers: filter queries and the middleware tree were born, and they are widely used in almost all bots. Admitting that sending requests is more than just a plain HTTP call (at least when you're working with Telegram) led us to developing API transformer functions: a core primitive that drastically changes how we think about plugins and what they can do. Admitting that long polling at scale is quite hard led us to grammY runner: the fastest long polling implementation that exists, outperforming all other JS frameworks by far.

Regarding conversational interfaces, the best we could come up with so far is GOTO. That was an okay first step a few years ago. Now, it is time to admit that this is harmful, and that we can do better.

3. So what have we done about this so far?

Not too much, which is why this issue exists. So far, we've been recommending that people combine routers and sessions rather than using scenes, as this does not take much more code, and porting the same plain old scenes to grammY is not ambitious enough.

There is a branch in this repository that contains some experiments with the future syntax that could be used, however, the feedback for it was mixed. It does bring some improvements to the situation as it provides a structure between the different steps in the conversation. Unfortunately, the resulting code is not too readable, and it makes things that belong together end up in different places of the code. It is always cool if the things that are semantically linked can be written close to each other.

As a consequence of this lack of progress, we need to have a proper discussion with everyone in the community in order to develop a more mature approach. The next section will suggest two ideas, one of them is the aforementioned one. Your feedback and ideas will impact the next step in developing conversational interfaces. Please speak up.

4. Some suggestions

Approach A: “Conversation Nodes”

This suggestion is the one we've mentioned above. Its main contribution is to introduce a more implicit way of defining scenes. Instead of creating a new instance of a class for every step, you can just call conversation.wait(). This will internally create the class for you. As a result, you can express the conversation in a more natural way. The wait calls make it clear where a message from the user is expected.

Here is the example from the top again. Handling invalid input is omitted intentionally for brevity.

const conversation = new Conversation('age-at-birth')

conversation.command('start', async ctx => {
  await ctx.reply('How old are you?')
  ctx.conversation.forward()
})

conversation.wait()
conversation.on('message:text', async ctx => {
  ctx.session.age = parseInt(ctx.msg.text, 10)
  await ctx.reply('Cool, how old is your mother?')
  ctx.conversation.forward()
})

conversation.wait()
conversation.on('message:text', async ctx => {
  const age = parseInt(ctx.msg.text, 10)
  await ctx.reply(`Alright, she was ${age - ctx.session.age} when you were born!`)
  ctx.conversation.leave()
})

This provides a simple linear flow that could be illustrated by

O
|
O
|
O

We can jump back and forth using ctx.conversation.forward(3) or ctx.conversation.backward(5).
The wait calls optionally take string identifiers if you want to jump to a specific point, rather than giving a relative number of steps.

Next, let us see how we can branch out, and have an alternative way of continuing the conversation.

const conversation = new Conversation('age-at-birth')

conversation.command('start', async ctx => {
  await ctx.reply('How old are you?')
  ctx.conversation.forward()
})

conversation.wait()

// start a new sub-conversation
const invalidConversation = conversation.filter(ctx => isNaN(parseInt(ctx.msg.text))).diveIn()
invalidConversation.on('message', ctx => ctx.reply('That is not a number, so I will assume you sent me the name of your pet'))
invalidConversation.wait()
// TODO: continue conversation about pets here

// Go on with regular conversation about age:
conversation.on('message:text', async ctx => {
  ctx.session.age = parseInt(ctx.msg.text, 10)
  await ctx.reply('Cool, how old is your mother?')
  ctx.conversation.forward()
})

conversation.wait()
conversation.on('message:text', async ctx => {
  const age = parseInt(ctx.msg.text, 10)
  await ctx.reply(`Alright, she was ${age - ctx.session.age} when you were born!`)
  ctx.conversation.leave()
})

We have now defined a conversation that goes like this:

O
|
O
| \
O O

That way, we can define conversation flows.

There are a number of improvements that could be done to this. If you have any concrete suggestions, please leave them below.

Approach B: “Nested Handlers”

Newcomers commonly try out something like this.

bot.command('start', async ctx => {
  await ctx.reply('How old are you?')
  bot.on('message', ctx => { /* ... */ })
})

grammY has a protection against this because it would lead to a memory leak, and eventually OOM the server. Every received /start command would add a handler that is installed globally and persistently. All but the first are unreachable code, given that next isn't called inside the nested handler.

It would be worth investigating if we can write a different middleware system that allows this.

const conversation = new Conversation()
conversation.command('start', async ctx => {
  await ctx.reply('How old are you?')
  conversation.on('message', ctx => { /* ... */ })
})

This would probably lead to deeply nested callback functions, i.e. bring us back to callback hell, something that could be called the GOTO statement of asynchronous programming.

What could we do to mitigate this?

Either way, this concept is still tempting. It is very intuitive to use. It obviously cannot be implemented with exactly the above syntax (because we are unable to reconstruct the current listeners on the next update, and we obviously cannot store the listeners in a database), but we could try to figure out whether small adjustments could make this possible. Internally, we would still have to convert this into something like an FSM, but maybe one that is generated on the fly. The dynamic ranges of the menu plugin could be used as inspiration here.

5. We need your feedback

Do you have a third idea? Can we combine the approaches A and B? How would you change them? Do you think the examples are completely missing the point? Any constructive feedback is welcome, and so are questions and concerns.

It would be amazing if we could find the right abstraction for this. It exists somewhere out there, we just have to find it.

Thank you!

plugin: BotCommandScope builder

Create a new repo with a plugin for advanced command filtering that goes beyond what people usually need. This not only allows users to do cooler things, but it also helps the core package stay focused on other things.

Desired features:

  • should work like menu plugin, but for command filtering
  • argument matching
  • ctx.command property that exposes the invoked command on the context object (useful for what we currently do with bot.command(array) where it is not possible to find out which command was invoked without inspecting the message text)
  • custom command prefixes, such as ! instead of /
  • optional case insensitivity
  • plugin should have the name commands

Feature request: ability to re-upload a file from given url

I want to use, for example, sendMediaGroup with a URL, but I don't want grammY to pass the URL to Telegram. Instead, it should download the file in the background and upload it as a new file.

Why?

Telegram fetches the media information from the URL and caches it. Small enough MP4 files with no audio are considered Animations; an MP4 with audio will be a Video.

I want to send an album with two mp4 files, one has audio and one has not, so they are Video + Animation. And Telegram does not allow me to send them as media group as Media Groups cannot contain Animations.

If I upload files from disk with InputFile, then it's OK:

ctx.replyWithMediaGroup([
    {
        type: "video",
        media: new InputFile("/tmp/evv/no-audio.mp4"),
    },
    {
        type: "video",
        media: new InputFile("/tmp/evv/yes-audio.mp4"),
    },
])

Telegram won't use its cache on uploads and will consider my MP4 files Videos.

If I use URL:

ctx.replyWithMediaGroup([
    {
        type: "video",
        media: "https://dev.nitra.pl/no-audio.mp4",
    },
    {
        type: "video",
        media: "https://dev.nitra.pl/yes-audio.mp4",
    },
])

Telegram will return:

{
  "ok": false,
  "error_code": 400,
  "description": "Bad Request: wrong file identifier/HTTP URL specified"
}

The proposed change could be useful in other cases as well, e.g. when my bot has access to some URL but Telegram's servers don't, which is useful with local HTTP-based microservices.

I can of course download the file manually, but it would be a much smoother experience to use something like this:

ctx.replyWithMediaGroup([
    {
        type: "video",
        media: new ReuploadFile("https://dev.nitra.pl/no-audio.mp4"),
    },
    {
        type: "video",
        media: new ReuploadFile("https://dev.nitra.pl/yes-audio.mp4"),
    },
])
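As a possible workaround until something like ReuploadFile exists: recent grammY versions reportedly let InputFile wrap a URL object (among other sources), in which case grammY fetches the file itself and uploads the raw bytes instead of handing the URL to Telegram. This is a sketch under that assumption; check the current InputFile documentation before relying on it.

```typescript
import { Context, InputFile } from "grammy";

// Sketch: let grammY download the remote files and re-upload them,
// so Telegram treats both MP4s as fresh Videos (no URL-based caching).
async function replyWithReuploadedAlbum(ctx: Context) {
    await ctx.replyWithMediaGroup([
        {
            type: "video",
            media: new InputFile(new URL("https://dev.nitra.pl/no-audio.mp4")),
        },
        {
            type: "video",
            media: new InputFile(new URL("https://dev.nitra.pl/yes-audio.mp4")),
        },
    ]);
}
```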

ctx.answerCbQuery not a function

I am currently refactoring my bot from Telegraf to grammY. However, I cannot rebuild some functionality because I am missing the function answerCbQuery(). I am sending the following inline button, which has a corresponding handler for 'callback_query'.

My bot.on() is triggered and my code reaches the inside of the if-statement below, but then it cannot access the method, returning TypeError: ctx.answerCbQuery is not a function.

Button Template

const buttons = [
  [
    {
      text: 'Click for callback query',
      callback_data: 'test_button',
      hide: false
    }
  ]
]

await bot.api.sendPhoto(chatId,
  'https://picsum.photos/1080/1080',
  {
    caption: 'test caption',
    parse_mode: 'markdown',
    reply_markup: {
      columns: 1,
      inline_keyboard: buttons
    }
  })

bot.on listener

bot.on('callback_query', async (ctx, next) => {
  if (ctx.callbackQuery.data.includes('test_')) {
    await ctx.answerCbQuery('test callback clicked', true)
  } else {
    return next()
  }
})

I have tried various solutions, ranging from ctx.api.answerCbQuery('...') to bot.api.answerCbQuery('...'). However, I cannot access this method, which was possible in my previous setup without a problem. Any help appreciated.
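For reference, a hedged sketch of what the handler could look like in grammY: grammY drops Telegraf's abbreviated method names in favor of the full Bot API names, so the equivalent of answerCbQuery should be answerCallbackQuery (worth double-checking against the current API reference).

```typescript
import { Bot } from "grammy";

const bot = new Bot("<your token>"); // token assumed

// grammY uses full Bot API method names, so Telegraf's `answerCbQuery`
// shortcut is spelled `answerCallbackQuery` here.
bot.on("callback_query:data", async (ctx, next) => {
    if (ctx.callbackQuery.data.includes("test_")) {
        await ctx.answerCallbackQuery({
            text: "test callback clicked",
            show_alert: true,
        });
    } else {
        await next();
    }
});
```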

Bug: undefined and unspecified have different behavior

I migrated yet another bot to grammy and found out that there is something fishy here.
I do not use transformers or anything; so far it is a plain Telegraf → grammY migration.

Technically, grammy-inline-menu (formerly telegraf-inline-menu) is used, which sets some properties to undefined instead of omitting them. I recreated the issue with plain grammY calls but haven't looked into what exactly might cause it.

The goal is to send a photo without caption.

await context.replyWithPhoto(media, {caption: undefined});

This ends up creating a photo message with a caption that literally says "undefined".

await context.replyWithPhoto(media, {caption: undefined, parse_mode: undefined});

This fails with "Call to 'sendPhoto' failed! (400: Bad Request: unsupported parse_mode)".
Part of the error contains the following, which indicates that here the undefined is still treated as undefined, not as the string "undefined":

{
  method: 'sendPhoto',
  payload: {
    chat_id: 2956631,
    photo: 'attach://hyyujfn4azjcick6',
    caption: undefined,
    parse_mode: undefined
  },
  ok: false,
  error_code: 400,
  description: 'Bad Request: unsupported parse_mode',
  parameters: {}
}

I did not create a minimal bot to check these calls in a simple environment, but as the bot is not that complex, I don't think there are currently side effects to it. But I might be wrong here.
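Until the root cause is fixed, one way to make undefined behave like an omitted property is an API transformer that strips such keys from the payload before serialization. This is a sketch, not grammY's built-in behavior; the stripUndefined helper is hypothetical.

```typescript
// Hypothetical helper: drop all properties whose value is `undefined`,
// so they serialize the same way as omitted properties.
function stripUndefined<T extends Record<string, unknown>>(payload: T): Partial<T> {
    return Object.fromEntries(
        Object.entries(payload).filter(([, value]) => value !== undefined),
    ) as Partial<T>;
}

// It could be installed as a grammY transformer (untested sketch):
// bot.api.config.use((prev, method, payload, signal) =>
//     prev(method, stripUndefined(payload) as typeof payload, signal));
```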

feat: Re-export Bot API types

They are currently exported from @grammyjs/types, so users have to have two explicit dependencies. This causes some trouble:

  • deps can get out of sync
  • especially bad on Deno because it always checks the entire project
  • not intuitive to install a second package, while the main one already uses the types
  • the InputFileProxy type is hard to understand

Implementing this has some disadvantages, too:

  • adds ~170 type exports to grammY (can be mitigated with a namespace), but still makes the API surface much larger
  • API reference will now mix grammY's definitions with those of the Bot API

feat: Keyboard button builders and `Keyboard.from`

Currently, we can only create complete keyboards using the keyboard plugin. It may be useful to add helpers that create individual buttons.

We should then add a static method Keyboard.from which takes a 2D array of button objects and creates a keyboard from them. Optionally, it handles reshaping on the fly. As a result, you can also pass a 1D array and a number of columns/rows, and the created keyboard will have the specified dimensions.

All of the above should be done for inline keyboards too.

Requested by @Loskir in the discussion around https://t.me/grammyjs/11332
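To make the reshaping part concrete, here is a minimal sketch of how a flat button array could be cut into rows of a fixed width; chunk is a hypothetical helper, not part of the keyboard plugin.

```typescript
// Turn a flat list of buttons into rows of at most `columns` buttons each.
function chunk<T>(items: T[], columns: number): T[][] {
    const rows: T[][] = [];
    for (let i = 0; i < items.length; i += columns) {
        rows.push(items.slice(i, i + columns));
    }
    return rows;
}

// A `Keyboard.from(buttons, { columns: 2 })` could then build its
// 2D layout internally as: const layout = chunk(buttons, 2);
```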

Unable to run the introductory demo

const { Bot } = require("grammy");

// Create an instance of the Bot class and pass your auth token to it.
const bot = new Bot("21000xxxxxx:AAHEfQRNY28AG6pCJQkVd4T5K6d5MMxxxxxx"); // <-- put your auth token between the ""
// You can now register listeners on your bot object.
// grammY will call the registered listeners when users send messages to your bot.

// React to the /start command
bot.command("start", (ctx) => ctx.reply("Welcome! Up and running."));
// Handle other messages
bot.on("message", (ctx) => ctx.reply("Got another message!"));

// Now that you have determined how to handle messages, you can run your bot.
// This will connect to the Telegram servers and wait for messages.

// Start your bot
try {
  bot.start()
} catch (error) {
  console.log(error)
}
Running it with node main.js does not work properly.
There is no error output at all.
(screenshot attached)

I need to go through a proxy to access Telegram, do I need to add additional configuration when new bot()?
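Regarding the proxy question: grammY lets you pass a custom fetch agent via the client options, which is the usual way to route API calls through a proxy on Node.js. The package name and option shape below follow the grammY docs as I remember them, so treat this as an unverified sketch.

```typescript
import { Bot } from "grammy";
import { SocksProxyAgent } from "socks-proxy-agent"; // third-party package, assumed

// Route all Bot API requests through a local SOCKS proxy.
const agent = new SocksProxyAgent("socks5://127.0.0.1:1080");
const bot = new Bot("<your token>", {
    client: { baseFetchConfig: { agent, compress: true } },
});
```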

Better separation of init and start

Currently, init() and start() are too heavily coupled, which makes it harder to extend them.

My use case is:

  1. I want to init() and start() the bot separately. That is because by calling init() I make sure the token is correct etc., and I must do some housekeeping before entering the (endless) start(). Also, if I understood correctly, if I switch to webhooks I will not be calling start() at all, yet I will still need init()?
  2. I want to subclass Bot and override init() with some extra code.

Currently, init() is called unconditionally in start:

grammY/src/bot.ts, lines 305 to 307 in dc0df47:

async start(options?: PollingOptions) {
    // Perform setup
    await withRetries(() => this.init());

init() itself prevents a double call by checking this.me:

grammY/src/bot.ts, lines 215 to 218 in dc0df47:

async init() {
    if (this.me === undefined) {
        debug("Initializing bot");
        const me = await this.api.getMe();

(but it still warns that the bot has been initialised!)

However, it is not possible to use that in a subclass, because this.me is private:

private me: UserFromGetMe | undefined;

What I suggest is, at the very least, to prevent start() from calling init() if it has already been called.
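Given that init() already returns early once this.me is set, one workable pattern today is a subclass that layers its own guarded setup on top and calls init() explicitly before start(). A sketch, assuming init() stays idempotent:

```typescript
import { Bot } from "grammy";

class MyBot extends Bot {
    private extraInitDone = false;

    // init() is public, so it can be awaited before start();
    // the guard keeps the custom housekeeping from running twice.
    override async init() {
        await super.init(); // performs the getMe() call only once
        if (!this.extraInitDone) {
            this.extraInitDone = true;
            // ... custom housekeeping here ...
        }
    }
}

// Usage: validate the token first, then enter long polling.
// const bot = new MyBot("<your token>");
// await bot.init();
// await bot.start();
```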

reply with poll bug

When I send a poll using the bot and the first person votes, I get the following error. Looks like a bug?
The session in the bot is configured and works in all cases except this one.
(screenshot of the error attached)

Feature request: Telegram MTProto API support

I am new to the grammY framework. To date, I have been using the TelegrafJS framework.
The grammY documentation is very well written and understandable. After reading it, I realized that grammY, like Telegraf, only has support for the Telegram HTTP Bot API.
If grammY itself had support for the Telegram MTProto API, it would open the door to great opportunities for users!

I would like to ask you to add support for Telegram MTProto at some point.
