grammyjs / grammy
The Telegram Bot Framework.
Home Page: https://grammy.dev
License: MIT License
My code looks like this:
bot.catch((err) => {
  const ctx = err.ctx;
  console.error(`Error while handling update ${ctx.update.update_id}:`);
  const e = err.error;
  console.log(`Some error was caught:`, e);
});
run(bot);
// ... 5 seconds after the bot has started:
try {
  bot.api.sendMessage(-10000000000, `some text`);
} catch (e) {
  console.log(`Some error was caught:`, e);
}
After starting the bot, I try to send a message to a chat that my bot was removed from:
/home/user/node_modules/grammy/out/core/client.js:110
throw new error_js_1.GrammyError(`Call to '${method}' failed!`, data, method, payload);
^
GrammyError: Call to 'sendMessage' failed! (403: Forbidden: bot was kicked from the supergroup chat)
at ApiClient.callApi (/home/user/node_modules/grammy/out/core/client.js:110:19)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
method: 'sendMessage',
payload: { chat_id: -10000000000, text: 'some text' },
ok: false,
error_code: 403,
description: 'Forbidden: bot was kicked from the supergroup chat',
parameters: {}
}
Node.js v17.0.1
I expected that the bot wouldn't crash on this kind of error.
Is there a way to prevent the bot from crashing when I try to send a message to a chat where I'm not allowed to send one?
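For what it's worth, one likely culprit in the snippet above is that the sendMessage call is not awaited, so the try/catch never sees the rejected promise and Node dies on an unhandled rejection. A framework-free sketch of the pattern (the sendMessage below is a hypothetical stand-in, not grammY's actual client):

```typescript
// Hypothetical stand-in for an API call that can reject, e.g. with a 403
// when the bot was kicked. This is NOT grammY's actual client.
async function sendMessage(chatId: number, text: string): Promise<string> {
  if (chatId < 0) throw new Error("403: Forbidden: bot was kicked");
  return "ok";
}

async function safeSend(chatId: number, text: string): Promise<string | undefined> {
  try {
    // The `await` is essential: without it, the rejection escapes the
    // try/catch and crashes the process as an unhandled rejection.
    return await sendMessage(chatId, text);
  } catch (e) {
    console.log("Send failed:", (e as Error).message);
    return undefined;
  }
}
```

Alternatively, chain .catch() onto the promise directly instead of using try/catch.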
Hi all.
I want to say thanks to all grammY developers for this superb and excellently documented library!
I'm currently using Telegraf 4 base scenes in the usual way: ctx.scene.enter, ctx.scene.leave, inline keyboard callback/action scenes, and so on.
So, what is the best way to migrate the scenes logic to grammY?
Regards!
I'm in the process of porting nest-telegraf to grammY. I like where this is going, but I'm struggling with a specific piece and could use some help (or guidance if this is a lost cause): their createListenerDecorator helper. Here's how they use it:
export const On = createListenerDecorator('on');
// or
export const Hears = createListenerDecorator('hears');
// or
export const Command = createListenerDecorator('command');
I'd love some assistance in getting this sorted out so I can use it properly in my code. The goal is to be able to do:
@Start()
onStart(): string {
return 'Say hello to me';
}
@Hears(['hi', 'hello', 'hey', 'qq'])
onGreetings(
@UpdateType() updateType: TelegrafUpdateType,
@Sender('first_name') firstName: string,
): string {
return `Hey ${firstName}`;
}
When I pass a nonexistent path to InputFile, grammY still tries to read it. This simple code:
bot.on("message", (ctx) => {
console.log("start", new Date().toISOString());
ctx.replyWithVideo(new InputFile("/tmp/this.does.not.exist")).catch(e => {
console.error(e);
console.error("end", new Date().toISOString());
});
});
Results in:
node | start 2021-09-25T10:27:57.567Z
node | HttpError: Network request for 'sendVideo' failed!
node | at ApiClient.call (/home/node/app/node_modules/grammy/out/core/client.js:55:27)
node | at processTicksAndRejections (node:internal/process/task_queues:96:5)
node | at ApiClient.callApi (/home/node/app/node_modules/grammy/out/core/client.js:68:22) {
node | error: FetchError: request to https://api.telegram.org/xxxxxxxxxxxxxxxxxxxx/sendVideo failed, reason: socket hang up
node | at ClientRequest.<anonymous> (/home/node/app/node_modules/node-fetch/lib/index.js:1461:11)
node | at ClientRequest.emit (node:events:402:35)
node | at ClientRequest.emit (node:domain:475:12)
node | at TLSSocket.socketOnEnd (node:_http_client:471:9)
node | at TLSSocket.emit (node:events:402:35)
node | at TLSSocket.emit (node:domain:475:12)
node | at endReadableNT (node:internal/streams/readable:1343:12)
node | at processTicksAndRejections (node:internal/process/task_queues:83:21) {
node | type: 'system',
node | errno: 'ECONNRESET',
node | code: 'ECONNRESET'
node | }
node | }
node | end 2021-09-25T10:28:57.618Z
Just following grammy's guide resulted in no responses when using webhook replies.
I traced the problem down to grammY handing back a JSON string without setting the Content-Type header. To work around this, I had to add a middleware that sets it:
app.use(async (ctx, next) => {
ctx.set("Content-Type", "application/json");
await next();
});
// Make bot callable via webhooks
app.use(webhookCallback(bot, "koa"));
This problem occurred for both Express and Koa.
My bot1.ts:
import { Bot } from "grammy";
const bot = new Bot( "token" );
bot.command( "start", ( ctx ) => ctx.reply( "Welcome! Up and running." ) );
bot.on( "message", ( ctx ) => ctx.reply( "Got another message!" ) );
bot.start();
Then:
» npx tsc
» node bot1.js
/home/user/nodejs/node_modules/grammy/out/core/client.js:110
throw new error_js_1.GrammyError(`Call to '${method}' failed!`, data, method, payload);
^
GrammyError: Call to 'getMe' failed! (404: Not Found)
at ApiClient.callApi (/home/user/nodejs/node_modules/grammy/out/core/client.js:110:19)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Bot.init (/home/user/nodejs/node_modules/grammy/out/bot.js:167:24)
at async Bot.start (/home/user/nodejs/node_modules/grammy/out/bot.js:251:9) {
method: 'getMe',
payload: undefined,
ok: false,
error_code: 404,
description: 'Not Found',
parameters: {}
}
Node.js v17.0.1
I installed grammy via npm i grammy.
What have I missed?
Reported in https://t.me/grammyjs/41642
ctx has type 'never' in composer.command when I set exactOptionalPropertyTypes in tsconfig to true.
const cmp = new Composer();
cmp.command('start', (ctx) => ctx.reply('test'));
Property 'reply' does not exist on type 'never'
bot1.ts:
import { Bot } from "grammy";
» tsc bot1.ts
../../node_modules/@types/express-serve-static-core/index.d.ts:584:18 - error TS2430: Interface 'Response<ResBody, Locals, StatusCode>' incorrectly extends interface 'ServerResponse'.
Property 'req' is optional in type 'Response<ResBody, Locals, StatusCode>' but required in type 'ServerResponse'.
584 export interface Response<
~~~~~~~~
../../node_modules/@types/express/index.d.ts:58:55 - error TS2344: Type 'Response<any, Record<string, any>>' does not satisfy the constraint 'ServerResponse'.
Property 'req' is optional in type 'Response<any, Record<string, any>>' but required in type 'ServerResponse'.
58 var static: serveStatic.RequestHandlerConstructor<Response>;
~~~~~~~~
../../node_modules/grammy/out/core/client.d.ts:1:23 - error TS2688: Cannot find type definition file for 'node-fetch'.
1 /// <reference types="node-fetch" />
~~~~~~~~~~
Found 3 errors.
» tsc -v
Version 4.4.4
» node -v
v17.0.1
What's wrong with my config?
The filter queries have a feature called twin properties, which makes some properties guaranteed if other ones are present. For example, animation messages always have the document property specified, and the filter queries account for that.
So far, twin properties only work on L2. However, there are fields such as from (only absent in channel post messages) or edit_date (only absent for new messages) which need to inspect L1 in order to make an assumption about L2 properties. In other words, we cannot simply assume that the twin properties can be read from a flat list. Instead, we need to build up nested object structures that can be merged (intersected) into the result type in order to support twins that have dependencies across several levels.
I've added my bot to some public chats (type: supergroup). How do I send a message to just one specific chat?
Do I need to store all contexts and call ctx.reply() for all of them?
How do I store only valid contexts?
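You don't need to keep contexts around at all; a chat id is enough, because messages can be sent through bot.api without any context. A sketch of the bookkeeping in plain TypeScript (the actual send call is shown as a comment since it needs a live grammY bot instance):

```typescript
// Remember the ids of chats the bot has been added to. In a handler you
// would call rememberChat(ctx.chat.id); the context itself is not stored.
const knownChats = new Set<number>();

function rememberChat(chatId: number): void {
  knownChats.add(chatId);
}

function forgetChat(chatId: number): void {
  // e.g. when the bot is removed from the chat
  knownChats.delete(chatId);
}

// Later, message exactly one chat by its id:
// await bot.api.sendMessage(someChatId, "Hello, just this chat!");
```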
It would be nice to be able to connect to multiple sessions at the same time. The problem with static typing that prevented this from being implemented can be solved by requiring an exhaustive list of properties upon session plugin instantiation. This would also lead to code with higher concurrency.
Hello. It looks as if the session key cannot be resolved for payments.
This happens after I click "pay 120 rub" in the dialog box from Telegram, after I filled in my card details on a third-party service (UKassa).
Middlewares:
session:
export const middleware = () =>
session({
initial: () => ({
user: {
isRegistered: false,
},
}),
storage,
});
i18n:
export const middleware = () =>
useFluent({
fluent,
localeNegotiator: (ctx: Context) =>
ctx.session.user.languageCode || ctx?.from?.language_code,
} as GrammyFluentOptions & {
localeNegotiator: LocaleNegotiator<Context>;
});
Also composer code, if it will be helpful:
composer.command('test', async ctx => {
await ctx.replyWithInvoice(
ctx.t('subscription_fill_up_balance_title', { count }),
ctx.t('subscription_fill_up_balance_title', { count }),
JSON.stringify({
unique_id: `${ctx.from!.id}_${Number(new Date())}`,
provider_token: config.BOT_PAYMENT_PROVIDER_TOKEN,
}),
config.BOT_PAYMENT_PROVIDER_TOKEN,
'RUB',
[{ label: count.toString(), amount: count * 100 }]
);
})
composer.on('pre_checkout_query', (ctx) => ctx.answerPreCheckoutQuery(true));
composer.on(':successful_payment', async (ctx, next) => {
await ctx.reply('SuccessfulPayment');
});
Stack:
Error while handling update 479364603:
Unknown error: Error: Cannot access session data because the session key was undefined!
at Context.get (/home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/convenience/session.js:67:27)
at localeNegotiator (/home/satont/Projects/funpay/apps/bot/src/middlewares/setup-i18n.middleware.ts:14:11)
at negotiateLocale (/home/satont/Projects/funpay/apps/bot/node_modules/@moebius/grammy-fluent/src/middleware.ts:95:15)
at fluentMiddleware (/home/satont/Projects/funpay/apps/bot/node_modules/@moebius/grammy-fluent/src/middleware.ts:81:11)
at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:61:41
at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:71:19
at /home/satont/Projects/funpay/apps/bot/src/middlewares/setup-logger.middleware.ts:16:10
at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:61:41
at /home/satont/Projects/funpay/apps/bot/node_modules/grammy/out/composer.js:71:19
at AsyncLocalStorage.run (node:async_hooks:320:14)
Node: 16
Grammy: 1.6.2
I'm thinking about allowing
for await (const data of ctx.session) {
// handle session data of user
}
which would effectively allow you to enumerate all sessions.
This would be optional to implement for storage adapters. Hence, it is a non-breaking change. Storage adapters that do not support this will simply throw an error when this is attempted.
Originally posted by @KnorpelSenf in #130 (comment)
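A sketch of what an adapter with optional enumeration could look like. The read/write/delete shape loosely mirrors grammY's StorageAdapter interface; the async iterator is the proposed addition, and the names here are assumptions:

```typescript
// In-memory adapter sketch; real adapters would talk to a database.
class MemoryAdapter<T> {
  private data = new Map<string, T>();

  async read(key: string): Promise<T | undefined> {
    return this.data.get(key);
  }
  async write(key: string, value: T): Promise<void> {
    this.data.set(key, value);
  }
  async delete(key: string): Promise<void> {
    this.data.delete(key);
  }
  // Proposed optional extension: enumerate all stored session data.
  async *[Symbol.asyncIterator](): AsyncGenerator<T> {
    yield* this.data.values();
  }
}
```

Adapters without the iterator would simply throw when enumeration is attempted, as described above.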
How do I ignore the error when a user blocks the bot while there are still pending updates? I tried bot.errorHandler, but it still crashes the server when the user blocks the bot.
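One workaround (a sketch; it assumes the thrown error carries error_code and description the way Bot API errors are logged) is to detect the 403 inside the error handler and swallow only that case:

```typescript
// Minimal shape of a Bot API error for illustration purposes.
interface ApiErrorShape {
  error_code: number;
  description: string;
}

function isBlockedByUser(e: unknown): boolean {
  const err = e as Partial<ApiErrorShape>;
  return (
    err?.error_code === 403 &&
    typeof err.description === "string" &&
    err.description.includes("blocked")
  );
}

// Usage inside the error handler (grammY part shown as a comment):
// bot.catch((botError) => {
//   if (isBlockedByUser(botError.error)) return; // ignore: user blocked bot
//   console.error(botError);                     // handle everything else
// });
```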
Let's say I want to build a middleware tree:
∟ filter private chat
∟ filter text
∟ filter sticker
∟ filter group chat
∟ filter text
∟ filter sticker
If I understand correctly, that's how one achieves that:
const privateChatComposer = new Composer<BotContext>()
privateChatComposer.on("message:text", ...)
privateChatComposer.on("message:sticker", ...)
const groupChatComposer = new Composer<BotContext>()
groupChatComposer.on("message:text", ...)
groupChatComposer.on("message:sticker", ...)
bot.filter((ctx) => ctx.chat?.type === "private", privateChatComposer)
bot.filter((ctx) => ctx.chat?.type === "group", groupChatComposer)
I propose creating a helper such as Composer.create (or perhaps a global compose function) which would make the code above more concise:
bot.filter((ctx) => ctx.chat?.type === "private", Composer.create(bot => {
bot.on("message:text", ...)
bot.on("message:sticker", ...)
}))
bot.filter((ctx) => ctx.chat?.type === "group", Composer.create(bot => {
bot.on("message:text", ...)
bot.on("message:sticker", ...)
}))
implemented similarly to:
import { Composer, Context } from "grammy"
// Sample implementation as a global function.
// Could as well be a static method on Composer.
export function compose<C extends Context>(setup: (composer: Composer<C>) => void) {
const composer = new Composer<C>()
setup(composer)
return composer.middleware()
}
UPDATE: based on the discussion below and prototyping, my current suggestion is:
bot.filter((ctx) => ctx.chat?.type === "private").setup(privateChat => {
privateChat.on("message:text", ...)
privateChat.on("message:sticker", ...)
})
Implemented like this:
export class Composer<C extends Context> {
  /** Run the provided setup function against the current composer. */
  setup(setup: (composer: this) => void) {
    setup(this)
    return this
  }
}
See monkey patch: https://github.com/IlyaSemenov/grammy-scenes/blob/87b5fd0b9153436bcf592bd47ff0e4e9bc8c208d/src/monkey_patches/composer_setup.ts
https://telegram.org/blog/protected-content-delete-by-date-and-more introduced the ability for bots to proactively send a message to users when they request becoming a member of a chat.
Currently, grammY does not support getting the user identifier via ctx.from.id in that case. It is also not possible to send messages to the group upon incoming join requests. It would be neat if we could simply reply to chat join requests, and have this send a message to the respective user.
Suggestion:
// Define admission test
const menu = new Menu('life-universe-everything')
menu.text('13', ctx => ctx.declineChatJoinRequest(ctx.from.id))
menu.text('42', ctx => ctx.approveChatJoinRequest(ctx.from.id))
menu.text('1337', ctx => ctx.declineChatJoinRequest(ctx.from.id))
bot.use(menu)
// Send admission tests, and notify group
bot.on('chat_join_request', async ctx => {
await ctx.api.sendMessage(ctx.from.id, 'Hi! What is the answer?', { reply_markup: menu })
await ctx.reply(`I contacted ${ctx.from.username ?? ctx.from.first_name} to see if they may join!`)
})
Hi, with Telegraf it was possible to change the name of the file with:
bot.telegram.sendDocument(chat_id, { source: `name.json`, filename: `other_name.json` }, { disable_notification: true });
How do I do this with grammY? I only found this:
bot.api.sendDocument(chat_id, new InputFile('name.json'))
Hi, cool project!
I tried using grammY in a Deno Deploy function and got the error
TypeError: cannot read property 'query' of undefined
in this line. The line is let res = await Deno.permissions.query(env), and as I understand it, the problem is that Deno Deploy does not expose the Deno.permissions API.
Not sure if this is something worth (or possible) doing anything about, but it would be very nice to be able to use this framework for deploying Telegram bots on Deno Deploy.
Thank you for the framework.
Just like we have a builder for InlineKeyboard
objects, we want one that simplifies creating media group objects which can be passed to the media
property of https://core.telegram.org/bots/api#sendmediagroup
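An illustrative sketch of such a builder (class and method names are assumptions, not an existing grammY API); it only needs to accumulate InputMedia-shaped objects for the media array:

```typescript
// Minimal InputMedia shape for illustration; the real Bot API type
// has more fields (parse_mode, thumbnails, etc.).
interface InputMediaSketch {
  type: "photo" | "video";
  media: string;
  caption?: string;
}

class MediaGroupSketch {
  private media: InputMediaSketch[] = [];

  photo(media: string, caption?: string): this {
    this.media.push({ type: "photo", media, caption });
    return this;
  }
  video(media: string, caption?: string): this {
    this.media.push({ type: "video", media, caption });
    return this;
  }
  // Returns the array to pass as the `media` property of sendMediaGroup.
  build(): InputMediaSketch[] {
    return this.media;
  }
}
```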
Is there a way to handle live video feeds?
I see there is a method to handle video files, but I would like to know if there is a way to handle live video feeds. My use case: I'm trying to identify objects in live video feeds.
There's a problem with the deployed website; you might already know about it.
If you go to grammy.dev and click API Reference, the link to https://doc.deno.land/https://deno.land/x/grammy/mod.ts is broken and shows 502 - DEPLOYMENT_FAILED.
I'm not sure if it's on your end or Deno Deploy's. Either way, it would be nice to look into.
Thank you!
There are five different possible types of message authors.
In each case, the combination of the values ctx.msg.chat.id, ctx.msg.is_automatic_forward, and ctx.msg.sender_chat.id is different.
It could be cool to not only have #105 but also something like
bot.senderType('user') // all regular user messages (includes bots)
bot.senderType(['channel-post', 'anonymous-admin']) // channel admins in channel or group
// etc
The exact strings should probably be different, please suggest something.
Does anyone need this?
This is a list of tasks that should be completed before the 1.10 release. All of them focus on improving how well the core library is suited to be used in conversations.
This list is tentative. Things may be added or removed if we figure that makes sense.
There is no ETA—it will be released once everything is done. It goes faster if you contribute.
If you want to help out but you don't know how to decide, please leave a comment here. If you want to work on some issue but you don't know how to begin, please leave a comment there. In any case, you can join the community chat and ask.
Passing an iterator to InputFile
means that the respective API call cannot be repeated. This is because the iterator may already be (partially) consumed, hence corrupting the file data.
Supplier functions could come to the rescue:
new InputFile(() => createNewIterator())
would allow users to pass a function that can return any of the values that are currently supported. This function will be invoked once per API call attempt.
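The underlying problem, and why a supplier fixes it, can be shown without grammY at all: an iterator can only be consumed once, while a supplier function hands out a fresh iterator per attempt:

```typescript
// A generator standing in for streamed file data.
function* chunks(): Generator<string> {
  yield "part1";
  yield "part2";
}

// Consuming the same iterator twice loses data:
const it = chunks();
const firstAttempt = [...it];  // gets both chunks
const retry = [...it];         // empty: the iterator is already consumed

// A supplier produces a fresh iterator for every attempt:
const supplier = () => chunks();
const attemptA = [...supplier()];
const attemptB = [...supplier()]; // both attempts see the full data
```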
Here is a list of ideas that could be turned into plugins.
You can suggest new ideas in the comments or in the group chat, and we may add them here. Once someone publishes an official plugin in version 1.0, we will remove the idea from this list. You can also decide to start working on a third-party plugin. In that case, you also need to link it on https://grammy.dev. You should be able to demonstrate that your plugin is working.
Deduplicates updates delivered via webhooks. If users follow https://grammy.dev/guide/deployment-types.html#how-to-use-webhooks then no duplicates should appear, but advanced use cases may still require this.
Helps sending messages to many people, or performing other bulk operations.
Helps sending long messages.
Helps creating complex code bases.
Helps writing tests for bots.
Helps displaying progress by editing messages (editMessageText, editMessageCaption, editMessageMedia, editMessageLiveLocation, editMessageReplyMarkup):
const progress = ctx.progress(chat_id, message_id, "style")
await progress.set(0.3) // 30 %
How to reproduce:
deno upgrade
deno cache "https://deno.land/x/[email protected]/mod.ts"
Version:
deno --version
deno 1.20.3 (release, x86_64-unknown-linux-gnu)
v8 10.0.139.6
typescript 4.6.2
Logs:
Check https://deno.land/x/[email protected]/mod.ts
error: TS2769 [ERROR]: No overload matches this call.
Overload 1 of 2, '(input: string | Request, init?: RequestInit | undefined): Promise<Response>', gave the following error.
Argument of type 'string | URL' is not assignable to parameter of type 'string | Request'.
Type 'URL' is not assignable to type 'string | Request'. Overload 2 of 2, '(input: URL, init?: RequestInit | undefined): Promise<Response>', gave the following error.
Argument of type 'string | URL' is not assignable to parameter of type 'URL'.
Type 'string' is not assignable to type 'URL'.
const { body } = await fetch(url);
~~~
at https://deno.land/x/[email protected]/platform.deno.ts:134:34
TS2322 [ERROR]: Type '((root: string, token: string, method: string) => URL) | ((root: string, token: string, method: string) => string)' is not assignable to type '(root: string, token: string, method: string) => URL'.
Type '(root: string, token: string, method: string) => string' is not assignable to type '(root: string, token: string, method: string) => URL'.
Type 'string' is not assignable to type 'URL'.
buildUrl: options.buildUrl ??
~~~~~~~~
at https://deno.land/x/[email protected]/core/client.ts:207:13
The expected type comes from property 'buildUrl' which is declared here on type 'Required<ApiClientOptions>'
buildUrl?: (
~~~~~~~~
at https://deno.land/x/[email protected]/core/client.ts:131:5
Found 2 errors.
Deno 1.17.0 is released and TypeScript 4.5 is embedded.
grammY is broken in deno 1.17.0 by the Awaited
type newly introduced in TypeScript 4.5 (ref).
Just run the command below:
$ deno run --reload https://deno.land/x/grammy/mod.ts
Error message:
error: TS2322 [ERROR]: Type 'S | undefined' is not assignable to type 'Awaited<S> | undefined'.
Type 'S' is not assignable to type 'Awaited<S> | undefined'.
Type 'S' is not assignable to type 'Awaited<S>'.
v = options.initial?.();
^
at https://deno.land/x/[email protected]/convenience/session.ts:273:17
Admittedly, that title was clickbait. It seems like you are interested in scenes or wizards, and other programming patterns that allow you to define conversational interfaces as if they were a finite-state machine (FSM, wiki).
THIS IS A LOT OF TEXT. Please still read everything before you comment. Let's try to keep the signal-to-noise ratio high on this one :)
One of the most frequently requested features is scenes. This issue shall:
A chat is a conversational interface. This means that the chat between the user and the bot evolves over time. Old messages stay relevant when processing current ones, as they provide the context of the conversation that determines how to interpret messages.
< /start
>>> How old are you?
< 42
>>> Cool, how old is your mother?
< 70
>>> Alright, she was 28 when you were born!
Note how the user sends two messages, and both are numbers. We only know that those two numbers mean two different things because we can follow the flow of the conversation. The two age numbers answer two different questions. Hence, in order to provide a natural conversational flow, we must store the history of the chat and take it into account when interpreting messages.
Note that Telegram does not store the chat history for bots, so you have to store it yourself. This is often done via sessions, but you can also use your own database.
In fact, we often don't need to know the entire chat history. The few most recent messages are often enough to remember, as we likely don't have to care about what the user sent back in 2018. It is therefore common to construct state, i.e. a small bit of data that stores where in the conversation we are. In our example, we would only need to store whether the last question was about the age of the user or about the age of their mother.
Scenes are a way to express this conversational style by allowing you to define a finite-state machine. Please look up what this is; it is essential for the following discussion. The state is usually stored in the session data. Scenes achieve this by isolating a part of the middleware into a block that can be entered and left.
Different bot frameworks have different syntax for this, but it typically works roughly like this (explanatory code, do not try to run):
// Define a separate part of the middleware handling.
const scene = new Scene('my-scene')
scene.command('start', ctx => ctx.reply('/start command from inside scene'))
scene.command('leave', ctx => ctx.scene.leave()) // leave scene
// Define regular bot.
const bot = new Bot('secret-token')
bot.use(session())
bot.use(scene)
bot.command('start', ctx => ctx.reply('/start command outside of scene'))
bot.command('enter', ctx => ctx.scene.enter('my-scene')) // enter scene
bot.start()
This could result in the following conversation.
< /start
>>> /start command outside of scene
< /enter
< /start
>>> /start command from inside scene
< /leave
< /start
>>> /start command outside of scene
In a way, every scene defines one step of the conversation. As you can define arbitrarily many of these scenes, you can define a conversational interface by creating a new instance of Scene
for every step, and hence define the message handling for it.
Scenes are a good idea. They are a huge step forward from only defining dozens of handlers on the same middleware tree. Bots that do not use scenes (or a similar form of state management) effectively forget everything that happened in the chat immediately after they're done handling a message. (If they seem to remember their context, then this is more or less a workaround which relies on a message that you reply to, inline menus, or other information in order to avoid state management.)
Scenes effectively reduce the flow of a conversation to being in a state, and then transitioning into another state (ctx.scene.enter('goto')
). This can be illustrated by translating scenes into routers:
const scene = new Router(ctx => ctx.session.scene)
// Define a separate part of the middleware handling.
const handler = new Composer()
scene.route('my-scene', handler)
handler.lazy(ctx => {
const c = new Composer()
c.command('start', ctx => ctx.reply('/start command from inside scene'))
c.command('leave', ctx => ctx.session.scene = undefined) // leave scene
return c
})
// Define regular bot.
const bot = new Bot('secret-token')
bot.use(session())
bot.use(scene)
bot.command('start', ctx => ctx.reply('/start command outside of scene'))
bot.command('enter', ctx => ctx.session.scene = 'my-scene') // enter scene
bot.start()
Instead of creating new Scene
objects, we simply create new routes, and obtain the same behaviour with minimally more code.
This may work if you have two states. It may also work for three. However, the more often you instantiate Scene
, the more states you add to your global pool of states, between which you're jumping around arbitrarily. This quickly becomes messy. It takes you back to the old days of defining a huge file of code without indentation, and then using GOTO to move around. This, too, works at a small scale, but considering GOTO harmful led to a paradigm shift that substantially advanced programming as a discipline.
In Telegraf, there are some ways to mitigate the problem. For example, one could add a way to group some scenes together into a namespace. As an example, Telegraf calls the Scene from above a Stage, and uses the word scene to group together several stages. It also allows you to force certain stages into a linear history, and calls this a wizard, in analogy to multi-step UI forms.
With grammY, we try to rethink the state of the art, and to come up with original solutions to long standing problems. Admitting that Update
objects are actually pretty complex objects led us to giving powerful tools to bot developers: filter queries and the middleware tree were born, and they are widely used in almost all bots. Admitting that sending requests is more than just a plain HTTP call (at least when you're working with Telegram) led us to developing API transformer functions: a core primitive that drastically changes how we think about plugins and what they can do. Admitting that long polling at scale is quite hard led us to grammY runner: the fastest long polling implementation that exists, outperforming all other JS frameworks by far.
Regarding conversational interfaces, the best we could come up with so far is GOTO. That was an okay first step a few years ago. Now, it is time to admit that this is harmful, and that we can do better.
Not too much, which is why this issue exists. So far, we've been recommending people to combine routers and sessions rather than using scenes, as that does not take much more code, and providing the same plain old scenes for grammY is not ambitious enough.
There is a branch in this repository that contains some experiments with the future syntax that could be used, however, the feedback for it was mixed. It does bring some improvements to the situation as it provides a structure between the different steps in the conversation. Unfortunately, the resulting code is not too readable, and it makes things that belong together end up in different places of the code. It is always cool if the things that are semantically linked can be written close to each other.
As a consequence of this lack of progress, we need to have a proper discussion with everyone in the community in order to develop a more mature approach. The next section will suggest two ideas, one of them is the aforementioned one. Your feedback and ideas will impact the next step in developing conversational interfaces. Please speak up.
This suggestion is the one we've mentioned above. Its main contribution is to introduce a more implicit way of defining scenes. Instead of creating a new instance of a class for every step, you can just call conversation.wait(). This will internally create the class for you. As a result, you can express the conversation in a more natural way. The wait calls make it clear where a message from the user is expected.
Here is the example from the top again. Handling invalid input is omitted intentionally for brevity.
const conversation = new Conversation('age-at-birth')
conversation.command('start', async ctx => {
await ctx.reply('How old are you?')
ctx.conversation.forward()
})
conversation.wait()
conversation.on('message:text', async ctx => {
ctx.session.age = parseInt(ctx.msg.text, 10)
await ctx.reply('Cool, how old is your mother?')
ctx.conversation.forward()
})
conversation.wait()
conversation.on('message:text', async ctx => {
const age = parseInt(ctx.msg.text, 10)
await ctx.reply(`Alright, she was ${age - ctx.session.age} when you were born!`)
ctx.conversation.leave()
})
This provides a simple linear flow that could be illustrated by
O
|
O
|
O
We can jump back and forth using ctx.conversation.forward(3)
or ctx.conversation.backward(5)
.
The wait
calls optionally take string identifiers if you want to jump to a specific point, rather than giving a relative number of steps.
Next, let us see how we can branch out, and have an alternative way of continuing the conversation.
const conversation = new Conversation('age-at-birth')
conversation.command('start', async ctx => {
await ctx.reply('How old are you?')
ctx.conversation.forward()
})
conversation.wait()
// start a new sub-conversation
const invalidConversation = conversation.filter(ctx => isNaN(parseInt(ctx.msg.text))).diveIn()
invalidConversation.on('message', ctx => ctx.reply('That is not a number, so I will assume you sent me the name of your pet'))
invalidConversation.wait()
// TODO: continue conversation about pets here
// Go on with regular conversation about age:
conversation.on('message:text', async ctx => {
ctx.session.age = parseInt(ctx.msg.text, 10)
await ctx.reply('Cool, how old is your mother?')
ctx.conversation.forward()
})
conversation.wait()
conversation.on('message:text', async ctx => {
const age = parseInt(ctx.msg.text, 10)
await ctx.reply(`Alright, she was ${age - ctx.session.age} when you were born!`)
ctx.conversation.leave()
})
We have now defined a conversation that goes like this:
O
|
O
| \
O O
That way, we can define conversation flows.
There are a number of improvements that could be done to this. If you have any concrete suggestions, please leave them below.
Newcomers commonly try out something like this.
bot.on('start', async ctx => {
await ctx.reply('How old are you?')
bot.on('message', ctx => { /* ... */ })
})
grammY has a protection against this because it would lead to a memory leak, and eventually OOM the server. Every received /start
command would add a handler that is installed globally and persistently. All but the first are unreachable code, given that next
isn't called inside the nested handler.
It would be worth investigating if we can write a different middleware system that allows this.
const conversation = new Conversation()
conversation.on('start', async ctx => {
await ctx.reply('How old are you?')
conversation.on('message', ctx => { /* ... */ })
})
This would probably lead to deeply nested callback functions, i.e. bring us back to callback hell, something that could be called the GOTO statement of asynchronous programming.
What could we do to mitigate this?
Either way, this concept is still tempting. It is very intuitive to use. It obviously cannot be implemented with exactly the above syntax (because we are unable to reconstruct the current listeners on the next update, and we obviously cannot store the listeners in a database), but we could try to figure out whether small adjustments make this possible. Internally, we would still have to convert this into something like an FSM, but maybe one that is generated on the fly. The dynamic ranges of the menu plugin could be used as inspiration here.
Do you have a third idea? Can we combine the approaches A and B? How would you change them? Do you think the examples are completely missing the point? Any constructive feedback is welcome, and so are questions and concerns.
It would be amazing if we could find the right abstraction for this. It exists somewhere out there, we just have to find it.
Thank you!
Currently, these options can only be passed manually in an options object; see https://grammy.dev/plugins/keyboard.html#sending-a-keyboard and its subsections.
It may be useful to add a selective() method to the Keyboard class which sets the respective flags. That could make it simpler to send keyboards.
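A sketch of what such a chainable method could look like. The selective() name and the minimal Keyboard stand-in below are assumptions, not the actual plugin API:

```typescript
// Minimal stand-in for the keyboard plugin's Keyboard class, extended
// with a hypothetical chainable selective() method.
class Keyboard {
  keyboard: { text: string }[][] = [[]];
  selectiveFlag = false;

  text(label: string): this {
    this.keyboard[this.keyboard.length - 1].push({ text: label });
    return this;
  }

  row(): this {
    this.keyboard.push([]);
    return this;
  }

  // Hypothetical helper: sets the `selective` flag on the generated
  // reply_markup instead of requiring a manual options object.
  selective(value = true): this {
    this.selectiveFlag = value;
    return this;
  }

  build() {
    return { keyboard: this.keyboard, selective: this.selectiveFlag };
  }
}

const markup = new Keyboard().text("Yes").text("No").selective().build();
// markup now carries selective: true without a separate options object
```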
Create a new repo with a plugin for advanced command filtering that goes beyond what people usually need. This not only allows users to do cooler things, but it also helps the core package stay focused on other things.
Desired features:
- a ctx.command property that exposes the invoked command on the context object (useful for what we currently do with bot.command(array), where it is not possible to find out which command was invoked without inspecting the message text)
- custom prefixes instead of / for commands
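A sketch of how such a ctx.command property could be derived from the message text. parseCommand is a hypothetical helper, not part of grammY or the proposed plugin:

```typescript
// Hypothetical helper: extract the invoked command from a message text,
// stripping the leading slash and an optional @BotName suffix.
function parseCommand(text: string): string | undefined {
  const match = /^\/([a-zA-Z0-9_]+)(?:@\w+)?(?:\s|$)/.exec(text);
  return match?.[1];
}

// Middleware could then expose this, e.g. ctx.command = parseCommand(text),
// so bot.command(["a", "b"]) handlers can tell which command fired.
```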
With TypeScript 4.4, I receive this error when I try to compile my project:
> tsc
node_modules/grammy/out/core/client.d.ts:1:23 - error TS2688: Cannot find type definition file for 'node-fetch'.
Just like we have a builder for InlineKeyboard objects, we want one that simplifies creating inline query result objects which can be passed to the results property of https://core.telegram.org/bots/api#answerinlinequery
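A rough sketch of such a builder. The class name, method names, and the trimmed-down result type are assumptions for illustration, not a finalized API:

```typescript
// Trimmed-down shape of one inline query result type from the Bot API.
interface InlineQueryResultArticle {
  type: "article";
  id: string;
  title: string;
  input_message_content: { message_text: string };
}

// Hypothetical builder that accumulates results for answerInlineQuery.
class InlineQueryResultBuilder {
  private results: InlineQueryResultArticle[] = [];

  article(id: string, title: string, messageText: string): this {
    this.results.push({
      type: "article",
      id,
      title,
      input_message_content: { message_text: messageText },
    });
    return this;
  }

  build(): InlineQueryResultArticle[] {
    return this.results;
  }
}

const results = new InlineQueryResultBuilder()
  .article("1", "First", "Hello!")
  .article("2", "Second", "World!")
  .build();
// `results` could then be passed to answerInlineQuery
```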
It happens very often that people forget to register chat_member updates.
We should add composer.chatType, which takes a ChatType | ChatType[] and filters for the given chat types.
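A sketch of the filter predicate such a method could use internally. The function name and ChatType alias here are assumptions:

```typescript
// The chat types the Bot API distinguishes.
type ChatType = "private" | "group" | "supergroup" | "channel";

// Sketch of the proposed filter logic: accepts one chat type or several,
// and checks whether the update's chat matches.
function matchesChatType(
  chatType: ChatType,
  filter: ChatType | ChatType[],
): boolean {
  const allowed = Array.isArray(filter) ? filter : [filter];
  return allowed.includes(chatType);
}

// composer.chatType(filter) could then run its handlers only when
// matchesChatType(ctx.chat.type, filter) is true.
```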
Hello,
I noted the library is using node-fetch as HTTP client:
https://github.com/grammyjs/grammY/blob/main/package.json#L28
If the library used an isomorphic fetch implementation like isomorphic-unfetch instead, it would enable running the library in more places, like Vercel Edge Functions.
Converted #55 to draft at some point to await a conclusion of the discussion started by https://t.me/grammyjs/18808. Relates to #38/#39.
Originally posted by @KnorpelSenf in #55 (comment)
I want to use, for example, sendMediaGroup with a URL, but I don't want grammY to pass the URL to Telegram; instead, it should download the file in the background and upload it as a new file.
Why?
Telegram fetches the media information from the URL and caches it. Small enough MP4 files with no audio are considered Animations. An MP4 with audio will be a Video.
I want to send an album with two MP4 files, one with audio and one without, so they are a Video plus an Animation. And Telegram does not allow me to send them as a media group, because media groups cannot contain Animations.
If I upload the files from disk with InputFile, it works:
ctx.replyWithMediaGroup([
{
type: "video",
media: new InputFile("/tmp/evv/no-audio.mp4"),
},
{
type: "video",
media: new InputFile("/tmp/evv/yes-audio.mp4"),
},
])
Telegram won't use its cache for uploads and will consider my MP4 files Videos.
If I use URLs:
ctx.replyWithMediaGroup([
{
type: "video",
media: "https://dev.nitra.pl/no-audio.mp4",
},
{
type: "video",
media: "https://dev.nitra.pl/yes-audio.mp4",
},
])
Telegram will return:
{
"ok": false,
"error_code": 400,
"description": "Bad Request: wrong file identifier/HTTP URL specified"
}
The proposed change could be useful in other cases as well: for example, when my bot has access to some URL but Telegram's servers don't, which happens with local HTTP-based microservices.
I can of course download the file manually, but it would be a much smoother experience to use something like this:
ctx.replyWithMediaGroup([
{
type: "video",
media: new ReuploadFile("https://dev.nitra.pl/no-audio.mp4"),
},
{
type: "video",
media: new ReuploadFile("https://dev.nitra.pl/yes-audio.mp4"),
},
])
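A minimal sketch of what the requested ReuploadFile could look like internally: store the URL and download the bytes on demand, so the framework can upload them the same way it uploads a file read from disk. The class and its download() method are assumptions, not grammY API:

```typescript
// Sketch of the proposed ReuploadFile: instead of forwarding the URL to
// Telegram, the framework would download the file itself and upload the
// raw bytes, so Telegram treats it like a fresh upload from disk.
class ReuploadFile {
  constructor(readonly url: string) {}

  // Download the file into memory; the resulting bytes could then be
  // sent exactly like an InputFile created from a local path.
  async download(): Promise<Uint8Array> {
    const response = await fetch(this.url);
    if (!response.ok) throw new Error(`Download failed: ${response.status}`);
    return new Uint8Array(await response.arrayBuffer());
  }
}
```

For large files, a real implementation would likely stream the body instead of buffering it all in memory.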
I am currently refactoring my bot from Telegraf to grammY. However, I cannot rebuild some functionality, because I am missing the function answerCbQuery(). I am sending the following inline button, which has a corresponding handler for 'callback_query'.
My bot.on() is triggered, and my code reaches inside the if-statement below, but then cannot access the method, failing with: TypeError: ctx.answerCbQuery is not a function.
const buttons = [
[
{
text: 'Click for callback query ',
callback_data: 'test_button',
hide: false
}
]
await bot.api.sendPhoto(chatId,
'https://picsum.photos/1080/1080',
{
caption: 'test caption',
parse_mode: 'markdown',
reply_markup: {
columns: 1,
inline_keyboard: buttons
}
})
bot.on('callback_query', async (ctx, next) => {
if (ctx.callbackQuery.data.includes('test_')) {
await ctx.answerCbQuery('test callback clicked', true)
} else {
return next()
}
})
I have tried various solutions, ranging from ctx.api.answerCbQuery('...') to bot.api.answerCbQuery('...'). However, I cannot access this method, which was possible in my previous setup without a problem. Any help appreciated.
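For what it's worth, grammY names this shorthand differently from Telegraf: the counterpart of ctx.answerCbQuery is ctx.answerCallbackQuery. A sketch of the handler using a stubbed context (the CallbackContext interface below is a stand-in for illustration, not grammY's real Context type):

```typescript
// Stand-in for the parts of grammY's context used here. In grammY, the
// shorthand is ctx.answerCallbackQuery, not Telegraf's ctx.answerCbQuery.
interface CallbackContext {
  callbackQuery: { data: string };
  answerCallbackQuery(text: string): Promise<void>;
}

// Handle callback queries whose data starts with our prefix;
// returns whether this handler consumed the update.
async function handleCallback(ctx: CallbackContext): Promise<boolean> {
  if (ctx.callbackQuery.data.includes("test_")) {
    await ctx.answerCallbackQuery("test callback clicked");
    return true;
  }
  return false;
}
```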
Do you plan to integrate with NestJs framework?
I migrated yet another bot to grammY and found out that there is something fishy here.
I do not use transformers or anything; so far it is a plain Telegraf → grammY migration.
Technically, telegraf-inline-menu → grammy-inline-menu is used, which sets undefined on some properties instead of omitting them. I recreated the issue with plain grammY calls but haven't looked into what exactly might cause it.
The goal is to send a photo without caption.
await context.replyWithPhoto(media, {caption: undefined});
Ends up creating a photo message with a caption which literally says "undefined".
await context.replyWithPhoto(media, {caption: undefined, parse_mode: undefined});
Fails with "Call to 'sendPhoto' failed! (400: Bad Request: unsupported parse_mode)"
Part of the error contains this, which indicates the error still holds the value undefined and not the string "undefined":
{
method: 'sendPhoto',
payload: {
chat_id: 2956631,
photo: 'attach://hyyujfn4azjcick6',
caption: undefined,
parse_mode: undefined
},
ok: false,
error_code: 400,
description: 'Bad Request: unsupported parse_mode',
parameters: {}
}
I did not create a minimal bot to check these calls in a simple environment, but as the bot is not that complex, I don't think there are currently any side effects involved. But I might be wrong here.
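Until the root cause is pinned down, one workaround sketch is to strip undefined values from the payload before it is serialized. The helper below is plain code for illustration; in grammY it could be wired up as an API transformer:

```typescript
// Remove properties whose value is `undefined`, so they are omitted from
// the request rather than serialized as the string "undefined".
function stripUndefined<T extends Record<string, unknown>>(payload: T): Partial<T> {
  const cleaned: Partial<T> = {};
  for (const [key, value] of Object.entries(payload)) {
    if (value !== undefined) (cleaned as Record<string, unknown>)[key] = value;
  }
  return cleaned;
}

const payload = { chat_id: 2956631, caption: undefined, parse_mode: undefined };
const cleaned = stripUndefined(payload);
// `cleaned` contains only chat_id; caption and parse_mode are gone
```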
They currently are exported from @grammyjs/types, so users have to have two explicit dependencies. This causes some trouble:
- the InputFileProxy type is hard to understand
Implementing this has some disadvantages, too:
Currently, we can only create complete keyboards using the keyboard plugin. It may be useful to add helpers that create individual buttons.
We should then add a static method Keyboard.from which takes a 2D array of button objects and creates a keyboard from them. Optionally, it handles reshapes on the fly. As a result, you can also pass a 1D array and a number of columns/rows, and the created keyboard will have the specified dimensions.
All of the above should be done for inline keyboards too.
Requested by @Loskir in the discussion around https://t.me/grammyjs/11332
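The reshape part of the proposal can be sketched as a small standalone function. keyboardFrom and the minimal Button type are illustrative names, not the plugin's actual API:

```typescript
// Minimal button shape for the sketch.
interface Button { text: string }

// Sketch of Keyboard.from with optional reshaping: given a flat list of
// buttons and a column count, build the 2D keyboard array row by row.
function keyboardFrom(buttons: Button[], columns?: number): Button[][] {
  if (columns === undefined) return [buttons]; // no reshape: single row
  const rows: Button[][] = [];
  for (let i = 0; i < buttons.length; i += columns) {
    rows.push(buttons.slice(i, i + columns));
  }
  return rows;
}

const grid = keyboardFrom(
  [{ text: "A" }, { text: "B" }, { text: "C" }, { text: "D" }, { text: "E" }],
  2,
);
// grid is [[A, B], [C, D], [E]] — the last row holds the remainder
```

The same helper would work unchanged for inline keyboard buttons, since only the button shape differs.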
It would be neat to have a helper type for every method on Composer (where it makes sense), such as OnContext, CommandContext, etc.
Accordingly, we should have the matching middleware types: OnMiddleware, CommandMiddleware, etc.
Requested by @Borodutch in https://t.me/grammyjs/23687
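A sketch of how the context and middleware helper types could relate. The type names follow the request above, but the Context and Middleware stubs below are simplified stand-ins, not grammY's real definitions:

```typescript
// Simplified stand-ins for grammY's Context and Middleware types.
interface Context { message?: { text: string } }
type Middleware<C> = (ctx: C, next: () => Promise<void>) => unknown;

// A context that is guaranteed to carry a text message, the way a
// bot.command() handler would see it:
type CommandContext<C extends Context> = C & { message: { text: string } };

// The matching middleware helper type:
type CommandMiddleware<C extends Context> = Middleware<CommandContext<C>>;

// With the helper, ctx.message is non-optional inside the handler:
const handler: CommandMiddleware<Context> = (ctx) => ctx.message.text;
```

The payoff is that handlers defined outside a bot.command() call can still be typed precisely without manual narrowing.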
const { Bot } = require("grammy");
// Create an instance of the Bot class and pass it your authentication token.
const bot = new Bot("21000xxxxxx:AAHEfQRNY28AG6pCJQkVd4T5K6d5MMxxxxxx"); // <-- put your token between the ""
// You can now register listeners on your bot object `bot`.
// grammY will call the registered listeners when users send messages to your bot.
// React to the /start command
bot.command("start", (ctx) => ctx.reply("Welcome! Up and running."));
// Handle other messages
bot.on("message", (ctx) => ctx.reply("Got another message!"));
// Now that you have decided how to handle messages, you can run your bot.
// This connects to the Telegram servers and waits for messages.
// Start your bot
try {
bot.start()
} catch (error) {
console.log(error)
}
It does not run properly via node main.js.
There is no error output at all.
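One possible cause worth checking (an assumption, not confirmed in this thread): bot.start() returns a Promise, so a synchronous try/catch around the call will never see its errors, which would explain the silent failure. A self-contained illustration of the pattern:

```typescript
// A stand-in for bot.start(): an async function whose failure surfaces
// as a Promise rejection, not a synchronous throw.
async function start(): Promise<void> {
  throw new Error("invalid token");
}

let caughtSync = false;
let caughtAsync = false;

try {
  // Without `await`, the rejection escapes this try/catch entirely.
  // (A rejection handler is attached so the process does not crash.)
  start().catch(() => { caughtAsync = true; });
} catch {
  caughtSync = true; // never runs
}
// Either `await start()` inside the try, or `start().catch(...)`,
// is needed to actually observe the error.
```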
I need to go through a proxy to access Telegram. Do I need to add additional configuration when calling new Bot()?
Currently, init() and start() are too heavily coupled, which makes it harder to extend them.
My use case is:
1. I init() and start() the bot separately. That is because by calling init() I make sure the token is correct etc., and I must do some housekeeping before entering the (endless) start(). Also, if I understood correctly, if I switch to webhooks I will not be calling start() at all, yet I will still need init().
2. I subclass Bot and override init() with some extra code.
Currently, init() is called unconditionally in start():
Lines 305 to 307 in dc0df47
init() itself prevents a double call by checking this.me:
Lines 215 to 218 in dc0df47
However, it is not possible to use that check in a subclass, because this.me is private:
Line 131 in dc0df47
What I suggest is, at the very least, to prevent start() from calling init() if it has been called already.
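The suggested guard can be sketched with minimal stand-in classes (this is not grammY's actual implementation; the initialized flag and initCalls counter are illustrative):

```typescript
// Sketch of the suggestion: start() skips init() when it already ran,
// and the guard state is visible to subclasses (protected, not private).
class MiniBot {
  protected initialized = false;
  initCalls = 0; // counter for the demonstration only

  async init(): Promise<void> {
    if (this.initialized) return; // prevent double initialization
    this.initCalls++;
    this.initialized = true;
  }

  async start(): Promise<void> {
    // Only initialize if the caller has not done so already.
    if (!this.initialized) await this.init();
    // ...enter the long-running update loop here...
  }
}

// A subclass can now add housekeeping without triggering a second init:
class MyBot extends MiniBot {
  override async init(): Promise<void> {
    await super.init();
    // ...extra housekeeping here...
  }
}
```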
What is the advantage of this framework over the alternatives?
The reason I became interested in grammy was the complete and clear documentation!
I am new to the grammY framework. To date, I have been using the TelegrafJS framework.
The grammY documentation is very well written and understandable. After reading it, I realized that grammY, like Telegraf, only has support for the Telegram HTTP Bot API.
If grammY itself had support for the Telegram MTProto API, it would open the door to great opportunities for users!
I would like to ask you to add support for Telegram MTProto at some point.