virtual-alexa's People

Contributors

allthepies, armonge, coreycole, dependabot[bot], dmarvp, ecruzado, iamporus, jkelvie, jperata, schiller-manuel, unstubbable


virtual-alexa's Issues

Add multi-language support

  1. VirtualAlexa.Builder should have a locale(string) method for setting the locale
    It will default to en-US if not set.
    The value will be used in the JSON requests:
    https://github.com/bespoken/virtual-alexa/blob/master/src/SkillRequest.ts#L204

  2. If no interactionModel or intentSchema is supplied, VirtualAlexa should default to looking for things according to the ASK CLI format
    It should check first if a models directory exists.
    If it does and there is no locale specified, it should look for models/en-US.json (the default locale)
    If locale is specified, it should use the locale to lookup the correct model.

    If the models directory does not exist, the existing error prompting the user to specify an intentSchema or interactionModel should be shown.
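
A sketch of how this might look from a test's point of view (the locale() method is the proposal above; the handler path and model file are illustrative):

import * as va from "virtual-alexa";

// Hypothetical usage of the proposed locale() builder method (not implemented yet):
const alexa = va.VirtualAlexa.Builder()
    .handler("index.handler")   // handler path assumed for illustration
    .locale("de-DE")            // omitted: defaults to "en-US"
    .create();                  // model resolved from models/de-DE.json per the ASK CLI layout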

bug(audio-player): attributes not preserved when audio player is playing

We have an intent, LogoutIntent. When we call it, it prompts the user to confirm:

Are you sure you want to logout?

So it emits a :ask response to the user, expecting a follow-up confirmation from them (yes/no). In this follow-up confirmation intent, we check the attributes to see which intent is being confirmed by the user. When the audio player is not playing, the attributes are preserved and our follow-up confirmation intent works as expected. But when the audio player is playing, the attributes object is empty, even though the session with the user has not ended.

When testing with a real Alexa device, it works both when the audio player is playing and when it is not.

Here is the test we are having trouble with:

test('LogoutIntent should log out', async () => {
    const deviceId = uuid();
    alexa.context().device().setID(deviceId);
    expect(alexa.context().device().id()).toEqual(deviceId);

    const logoutResponse = await alexa.intend('LogoutIntent');
    expect(logoutResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutResponse['response'].outputSpeech.ssml).toMatchSnapshot();
    // attributes are defined to guide the confirmation intent

    const logoutConfirmationResponse = await alexa.utter('yes');
    // attributes are defined here, works as intended
    expect(logoutConfirmationResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutConfirmationResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const launchResponse = await alexa.launch();
    expect(launchResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(launchResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const loginResponse = await alexa.utter(`My name is ${testConstants.userName}`);
    expect(loginResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(loginResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const usernameConfirmationResponse = await alexa.utter('yes');
    expect(usernameConfirmationResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(usernameConfirmationResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const logoutResponse2 = await alexa.intend('LogoutIntent');
    expect(logoutResponse2['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutResponse2['response'].outputSpeech.ssml).toMatchSnapshot();

    const logoutConfirmationResponse1 = await alexa.utter('yes');
    // attributes are defined here, works as intended
    expect(logoutConfirmationResponse1['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutConfirmationResponse1['response'].outputSpeech.ssml).toMatchSnapshot();

    const loginResponse1 = await alexa.utter(`My name is ${testConstants.userName}`);
    expect(loginResponse1['response'].outputSpeech.ssml).toBeDefined();
    expect(loginResponse1['response'].outputSpeech.ssml).toMatchSnapshot();

    const usernameConfirmationResponse1 = await alexa.utter('yes');
    expect(usernameConfirmationResponse1['response'].outputSpeech.ssml).toBeDefined();
    expect(usernameConfirmationResponse1['response'].outputSpeech.ssml).toMatchSnapshot();

    const joinIntentResponse = await alexa.intend('JoinIntent', { channelname: testConstants.channelName });
    expect(joinIntentResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(joinIntentResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const channelConfirmationResponse = await alexa.utter('yes');
    // at this point, the audio player starts
    expect(channelConfirmationResponse['response'].outputSpeech).toBeUndefined();
    expect(channelConfirmationResponse['response'].shouldEndSession).toBeDefined();
    expect(channelConfirmationResponse['response'].shouldEndSession).toBeTruthy();

    const logoutResponse3 = await alexa.intend('LogoutIntent');
    expect(logoutResponse3['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutResponse3['response'].outputSpeech.ssml).toMatchSnapshot();

    // when testing with alexa device, the attributes are defined here as expected
    // but when testing with virtual-alexa, the attributes object is empty (defined, but empty {})
    // (even though the session is still open i.e. emit ":ask")
    const logoutConfirmationResponse2 = await alexa.intend('SoloSlotIntent', { solo_slot_value: 'yes' });
    // we have tried this with both "utter" and "intend", neither works as expected
    expect(logoutConfirmationResponse2['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutConfirmationResponse2['response'].outputSpeech.ssml).toMatchSnapshot();
}, 15000);

sdk.emit(":tell", ...) doesn't end the session

Description:

According to the specification, or from what I have read online, at least, issuing a :tell command should end the session. When I run my tests, the session seems to persist across tests. Returning endSession() kills the entire thing, so I can't see a way to succinctly end the session in a beforeEach or afterEach. I am using the giftionary codebase as the basis for my app.

Environment:

  • Version: 0.66
  • OS: OSX 10.5
  • Node version: 10.11

Steps To Reproduce

Steps to reproduce the behavior:

  1. Issue an sdk.emit(":tell", ...) call in a test suite and notice how other tests inherit the state

Expected behavior

Session should end. Other tests should be treated as new sessions

Actual behavior

Tests are all considered in the same session
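
As a stopgap while this is open, one workaround (my own hedged sketch, not an official recommendation) is to build a fresh VirtualAlexa per test so no session state can leak between tests:

import * as va from "virtual-alexa";

let alexa;

beforeEach(() => {
    // A fresh emulator instance starts with a fresh session, regardless of how
    // the previous test ended. Handler and model paths are assumed.
    alexa = va.VirtualAlexa.Builder()
        .handler("index.handler")
        .interactionModelFile("./models/en-US.json")
        .create();
});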

Add echo buttons support

I am trying to write tests for Echo Buttons. Have you thought about adding Echo Buttons support? It would be so helpful when writing tests for them, because at the moment it is really tedious to build custom requests by hand. It would be awesome to just write something like alexa.buttonPress() instead of struggling with raw requests.

Japanese Support

Validate whether the current multi-language support code is all that is needed for Japanese to work correctly.

If not, make the additions necessary for it.

Google Assistant Integration Research

  • Review which parts of virtual-alexa can be set up as a core, so that Alexa details are isolated from the core functionality.
  • Research Google and Dialogflow JSON generation so we can mimic it, as we currently do with Alexa.
  • Create issues from this information detailing small steps.

TypeError: this.schemaJSON.intents is not iterable

Using

    "bespoken-tools": "1.2.9",
    "virtual-alexa": "0.5.1",

I'm getting

      at IntentSchema.intents (node_modules/virtual-alexa/lib/src/IntentSchema.js:19:50)
      at IntentSchema.intent (node_modules/virtual-alexa/lib/src/IntentSchema.js:32:30)
      at IntentSchema.hasIntent (node_modules/virtual-alexa/lib/src/IntentSchema.js:41:21)
      at InteractionModel.hasIntent (node_modules/virtual-alexa/lib/src/InteractionModel.js:69:34)
      at new InteractionModel (node_modules/virtual-alexa/lib/src/InteractionModel.js:20:22)
      at VirtualAlexaBuilder.create (node_modules/virtual-alexa/lib/src/VirtualAlexa.js:105:21)
      at Object.done (src/__tests__/integration.test.ts:27:9)

As my intent schema file looks something like this:

{
    "interactionModel": {
        "languageModel": {
            "invocationName": "chat",
            "intents": [
                {
                    "name": "LoginIntent",

This is the new format introduced with the recent release of ask-sdk version 2.

I was able to hack a fix like this, but not sure if this is what you'd like to do in IntentSchema.js:

class IntentSchema {
    constructor(schemaJSON) {
        if (schemaJSON.intents) {
            // Old-style intent schema
            this.schemaJSON = schemaJSON;
        } else if (schemaJSON.interactionModel
            && schemaJSON.interactionModel.languageModel
            && schemaJSON.interactionModel.languageModel.intents) {
            // New interaction model format (ask-sdk v2)
            this.schemaJSON = schemaJSON.interactionModel.languageModel;
        } else {
            throw new Error('unsupported IntentSchema.json file');
        }
    }
// ...
}

I'm running into a different problem with the intent name when doing this, though; I haven't tracked it down yet.

TypeError: Cannot read property 'startsWith' of undefined

In SkillRequest.js, when const isBuiltin = intentName.startsWith("AMAZON"); is called:

    intentRequest(intentName) {
        const isBuiltin = intentName.startsWith("AMAZON");
        if (!isBuiltin) {
            if (!this.context.interactionModel().hasIntent(intentName)) {
                throw new Error("Interaction model has no intentName named: " + intentName);
            }
        }
        this.requestJSON = this.baseRequest(RequestType.INTENT_REQUEST);
        this.requestJSON.request.intent = {
            name: intentName,
        };
        if (!isBuiltin) {
            const intent = this.context.interactionModel().intentSchema.intent(intentName);
            if (intent.slots !== null && intent.slots.length > 0) {
                this.requestJSON.request.intent.slots = {};
                for (const slot of intent.slots) {
                    this.requestJSON.request.intent.slots[slot.name] = {
                        name: slot.name,
                    };
                }
            }
        }
        return this;
    }

Empty "synonyms" on a SlotType value causes Virtual Alexa to error when attempting to "utter"

When attempting to call the "utter" function on a created VirtualAlexa, if the interaction model contains a slot value with an empty synonyms array, the function will throw an error. When using an utterance that resolves to a slot value that doesn't have a synonyms array, it functions correctly.

For info, when using the Amazon Alexa Skill Builder UI in the developer portal, the default behaviour seems to be to add an empty synonyms array to any slot value created using the UI.

Example code to reproduce

index.js

const va = require('virtual-alexa');

const interactionModel = {
    "languageModel": {  
        "intents": [
            {
                "name": "SlottedIntent",
                "samples": ["slot {SlotName}"],
                "slots": [
                    { "name": "SlotName", "type": "SLOT_TYPE" }
                ]
            }
        ],
        "types": [
            {
                "name": "SLOT_TYPE",
                "values": [
                    {
                        "id": null,
                        "name": {
                            "value": "invalid",
                            "synonyms": []
                        }
                    },
                    {
                        "id": null,
                        "name": {
                            "value": "valid"
                        }
                    }   
                ]
            }
        ]
    }
}

const alexa = va.VirtualAlexa.Builder()
    .handler('lambda.handler')
    .interactionModel(interactionModel)
    .create();


alexa.utter('slot invalid')
    .then(response => {
        console.log(response)
    })
    .catch(err => {
        console.log(err);
    });

I'm using a simple lambda function to return the intent name and slot value that is passed in:

lambda.js

exports.handler = function (event, context, callback) {
    var slot;
    if (event.request.intent && event.request.intent.slots) {
        var slotName = Object.keys(event.request.intent.slots)[0];
        slot = event.request.intent.slots[slotName];
    }

    var response = { success: true, slot: slot };
    if (event.request.intent) {
        response.intent = event.request.intent.name;
    }

    if (event.request.intent && event.request.intent.name == "AMAZON.StopIntent") {
        response.response = { shouldEndSession: true };
    }
    context.done(null, response);
}

The expected behaviour is that the utterance "slot invalid" resolves to the "SlottedIntent" with a slot value of "invalid":
{ success: true, slot: { name: 'SlotName', value: 'invalid' }, intent: 'SlottedIntent' }

The actual response is this:

No intentName matches utterance: [object Object]. Using fallback utterance: slot {SlotName}
/home/lea/Code/alexa/va-bug/node_modules/virtual-alexa/lib/src/SkillRequest.js:34
        const isBuiltin = intentName.startsWith("AMAZON");
                                    ^

TypeError: Cannot read property 'startsWith' of undefined
    at SkillRequest.intentRequest (/home/lea/Code/alexa/va-bug/node_modules/virtual-alexa/lib/src/SkillRequest.js:34:37)
    at LocalSkillInteractor.callSkillWithIntent (/home/lea/Code/alexa/va-bug/node_modules/virtual-alexa/lib/src/SkillInteractor.js:91:83)
    at LocalSkillInteractor.spoken (/home/lea/Code/alexa/va-bug/node_modules/virtual-alexa/lib/src/SkillInteractor.js:40:21)
    at VirtualAlexa.utter (/home/lea/Code/alexa/va-bug/node_modules/virtual-alexa/lib/src/VirtualAlexa.js:33:32)
    at Object.<anonymous> (/home/lea/Code/alexa/va-bug/index.js:43:7)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)

Allow for arbitrary skill requests to be created

This allows requests that are not baked into the API yet, such as PlaybackController.PreviousCommandIssued and GameEngine.InputHandlerEvent, to be created more easily.

Likely will involve exposing the SkillRequest object and methods for sending it to virtual alexa.
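
A rough sketch of what that could look like; all names are hypothetical, since the issue only says the SkillRequest object would be exposed:

// Hypothetical builder-style API for arbitrary request types:
const response = await alexa.request()
    .type("PlaybackController.PreviousCommandIssued")  // any request type string
    .send();                                           // deliver it to the skill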

Add session attributes support

Needed to build conversational and prompt-like interfaces. It's not possible to test cases like that without session attributes.

How to simulate a linked account?

Is your feature request related to a problem? Please describe.
We are trying to unit test a feature that requires account linking.

Describe the solution you'd like
A way to simulate an account-linked device that is able to hit Amazon's profile APIs (we need to hit their APIs once the account is linked).

Question about virtual-alexa dynamodb mocking and address api mocking

Hello,

I've been using 'virtual-alexa' for my end-to-end testing needs and it's pretty neat!

However, I am stuck on mocking dynamodb calls and the address api calls.

I tried using 'virtualAlexa.dynamoDB().mock();' but that does not seem to help me when I'm querying using: AWS.DynamoDB.DocumentClient().

This is how it's being used:

this.docClient = new AWS.DynamoDB.DocumentClient();

getUser(alexaUserId) {
    const params = {
        // TableName: process.env.user_table_name,
        TableName: 'UserAttributes',
        KeyConditionExpression: '#alexaUserId = :alexaUserId',
        ExpressionAttributeNames: {
            '#alexaUserId': 'alexaUserId',
        },
        ExpressionAttributeValues: {
            ':alexaUserId': alexaUserId,
        },
    };

    return this.docClient.query(params).promise();
}

In the readme: https://github.com/bespoken/virtual-alexa/blob/master/docs/Externals.md#dynamodb

it says that I can make normal calls to DynamoDB but it does not seem to be working. I'm running into something like below:

TypeError [ERR_INVALID_ARG_TYPE]: The "key" argument must be one of type string, TypedArray, or DataView

  at Object.hmac (node_modules/aws-sdk/lib/util.js:401:30)
  at Object.getSigningKey (node_modules/aws-sdk/lib/signers/v4_credentials.js:62:8)
  at V4.signature (node_modules/aws-sdk/lib/signers/v4.js:97:36)
  at V4.authorization (node_modules/aws-sdk/lib/signers/v4.js:92:36)
  at V4.addAuthorization (node_modules/aws-sdk/lib/signers/v4.js:34:12)
  at node_modules/aws-sdk/lib/event_listeners.js:223:18
  at finish (node_modules/aws-sdk/lib/config.js:320:7)
  at node_modules/aws-sdk/lib/config.js:338:9

Earlier, I was running into this error:
MissingRequiredParameter: Missing required key 'TableName' in params
and I fixed it by changing: TableName: process.env.user_table_name,
to: TableName: 'UserAttributes',

The key error seems to deal with the KeyConditionExpression formatting. I'd like some more help on this.

Also, regarding address api, I tried using the example here: https://github.com/bespoken/virtual-alexa/blob/master/docs/Externals.md#address-api

However, after filling out the mocked address info, I am running into this error:
FetchError: request to https://api.amazonalexa.com/v1/devices/virtualAlexa.deviceID.b9d8670b-ec22-4525-bb7c-d1f091bcba30/settings/address failed, reason: Nock: No match for request {
    "method": "GET",
    "url": "https://api.amazonalexa.com/v1/devices/virtualAlexa.deviceID.b9d8670b-ec22-4525-bb7c-d1f091bcba30/settings/address",
    "headers": {
        "authorization": [
            "Bearer virtualAlexa.accessToken.201f95ed-1da2-4597-84ef-c165a09ed0c9"
        ],
        "accept-encoding": [
            "gzip,deflate"
        ],
        "user-agent": [
            "node-fetch/1.0 (+https://github.com/bitinn/node-fetch)"
        ],
        "connection": [
            "close"
        ],
        "accept": [
            "/"
        ]
    }
}

  at OverriddenClientRequest.<anonymous> (node_modules/node-fetch/index.js:133:11)
  at node_modules/nock/lib/request_overrider.js:221:11

I'm stumped on why this is happening. Help would be much appreciated.

Thank you,
John Chung

Working with two virtualAlexaBuilder

I have test that rely on two alexas connecting simultaneously.

beforeEach(async () => {
    // the below file paths are relative to where package.json is for some reason
    // maybe because it's where the test command is called...
    server = new bst.LambdaServer('./init', 9997, true);
    alexa = new va.VirtualAlexaBuilder()
        .handler('./init.handler')
        .interactionModelFile('./speechAssets/InteractionModel.json')
        .applicationID(getAppId())
        .create();
    alexa2 = new va.VirtualAlexaBuilder()
        .handler('./init.handler')
        .interactionModelFile('./speechAssets/InteractionModel.json')
        .applicationID(getAppId())
        .create();

    alexa.context().device().setID('test-device-id-1');
    alexa2.context().device().setID('test-device-id-2');
    await server.start();
}, 5000);

While testing one suite of tests, I am using only alexa for launching (not alexa2). But sometimes it gets 'test-device-id-2' instead of 'test-device-id-1'. The occurrence is seemingly random. In the beforeEach of my test I am resetting the value of the deviceId:

    alexa.context().device().setID('test-device-id-1');
    alexa2.context().device().setID('test-device-id-2');

In the middle of some test (using alexa, not alexa2):

// ...
const logoutResp = await alexa.intend('LogoutIntent');
expect(logoutResp.response.outputSpeech.ssml).toBeDefined();
expect(logoutResp.response.outputSpeech.ssml).toMatchSnapshot();
// ...

The request comes through to my LambdaServer

// ...
export const handler = (event: Alexa.RequestBody<Alexa.Request>, context: Alexa.Context, callback: (err: any, response: any) => void): void => {
    let deviceId = event.context.System.device.deviceId;
    console.log(`[index][handler]: deviceId = ${deviceId}`);
// ...

with deviceId for alexa2 (not alexa)

[index][handler]: deviceId = test-device-id-2

Is handling two Alexa clients simultaneously supported? It's important for some of my tests, as users interact with one another in my skill. It has worked in the past for us, but now we're having issues. I tried updating to the latest versions of bst and virtual-alexa:

"dependencies": {
    "alexa-sdk": "1.0.25",
    "axios": "0.18.0",
    "colors": "1.2.4",
    "dotenv": "5.0.1"
  },
  "devDependencies": {
    "@types/colors": "1.2.1",
    "@types/jest": "^22.2.3",
    "@types/node": "10.0.8",
    "bespoken-tools": "1.2.10",
    "jest": "22.4.3",
    "jshint": "2.9.5",
    "nodemon": "1.17.4",
    "serverless": "1.27.0",
    "serverless-webpack": "5.1.3",
    "ts-jest": "22.4.5",
    "ts-loader": "4.2.0",
    "ts-node": "6.0.2",
    "tslint": "5.10.0",
    "typescript": "2.8.3",
    "virtual-alexa": "0.6.1",
    "webpack": "4.6.0",
    "webpack-cli": "2.1.2",
    "webpack-node-externals": "1.7.2"
},

Dialog slot confirmation, documentation needed

Description:

I have issues getting dialog testing with slot confirmation to work. I guess it's just an issue of lacking documentation / examples.

When testing for multiple slots, I can do something like this:

    alexa.intend("MyIntent").then((payload) => {
        // Send first slot value
        return alexa.intend("MyIntent", { first_value: "1234"});
    }).then((payload) => {
        // Check for OK response and then send second slot value
        return alexa.intend("MyIntent", { second_value: "hello world"});
    });
   // .. and so on

This also works great with IntentConfirmation, i.e. just setting confirmationStatus = 'CONFIRMED' on the request through a filter and calling the intent again. However, I then added slot confirmation to the skill, which works fine in the Alexa test console (getting the Dialog.ConfirmSlot directive and such), but I cannot figure out how to do a proper dialog slot confirmation with virtual-alexa (see the sketch after the environment details).

The documentation states "However, it is incumbent on the developer to issue directives for ElicitSlot, ConfirmSlot and ConfirmIntent", without any further information. Maybe it's just me not grasping this fully, but I feel the documentation could use an example of this use case.

Environment:

  • Version: 0.6.12
  • OS: macOS 10.14
  • Node version: 10.8.0
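
For what it's worth, here is a hedged guess at a slot-confirmation test, extending the filter trick described above for intent confirmation. The slot name first_value is reused from the example; this is unverified against real Alexa's request shape:

const response = await alexa
    .filter((requestJSON) => {
        // Mark the slot as confirmed before the request reaches the skill.
        requestJSON.request.intent.slots.first_value.confirmationStatus = "CONFIRMED";
    })
    .intend("MyIntent");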

Throws an error as elicitSlot when the invoking intent is not associated

Is your feature request related to a problem? Please describe.
If you issue an elicitSlot response from a lambda with a slot name that isn't associated with the intent that triggered the lambda, Virtual Alexa accepts this and returns the Dialog elicitSlot response. However, this is an illegal operation: "real Alexa" throws an error, as elicitSlot can only be issued with a slot name associated with the invoking intent.

Describe the solution you'd like
Throw an error similar to what Alexa does.

Describe alternatives you've considered
Keep logic as it is, it's a minor difference

Additional context
We should update the documentation

Incorrect type interface for IResponse

Description:

Inside virtual-alexa\lib\src\core\IResponse.d.ts the interface is empty:

export interface IResponse {
}

Environment:

  • Version: 0.6.6
  • OS: Windows 10
  • Node version: 8.11.3

Steps To Reproduce

Steps to reproduce the behavior:

  1. Create project using typescript
  2. Import IResponse, try to access response property on an instance of IResponse

Expected behavior

Code should compile, response property should exist

Actual behavior

Code fails to compile

Code example

import { IResponse } from "virtual-alexa";
let t: IResponse;
console.log(t.response);
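
A minimal sketch of what the interface could declare so the example above compiles. Field names are inferred from how responses are used throughout these issues, not taken from the library source:

// Sketch only, not the library's actual typings:
export interface IResponse {
    response: any;           // the raw Alexa response body (outputSpeech, shouldEndSession, ...)
    sessionAttributes?: any; // assumed: session attributes, when a session is open
}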

Add utterance to alexa.launch() ?

Hi all:

const launchResponse = await alexa.launch();

can simulate a user saying "Alexa, open {invocation-name}".

Is it possible to add utterance support to launch()? Maybe something like:

await alexa.launch("do some work");

to simulate a user saying "Alexa, ask {invocation-name} to do some work"?

And thank you all; you did a great job.

Ensure additional elements do not appear in generated docs

On our generated docs site, additional elements have crept in (see the screenshot in the issue).

Specifically, the AWS and nock elements.

I believe this occurs because those elements are members of classes that should be marked private but instead are left as the default (which is then interpreted as public).

pass handler function directly

How can I pass a handler function directly into virtual-alexa (instead of passing a string)?

My use case: I need to mock several resources when running the tests locally, hence I want to use a custom handler function purely for testing.
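
A sketch of the desired usage, assuming the builder accepted a function; today handler() takes a string path, so this is hypothetical:

// myTestHandler is a stand-in for a locally defined handler that mocks resources.
const myTestHandler = async (event, context) => {
    return { response: { shouldEndSession: true } }; // minimal canned reply
};

// Hypothetical: handler() accepting the function itself instead of a module path.
const alexa = va.VirtualAlexa.Builder()
    .handler(myTestHandler)
    .interactionModelFile("./models/en-US.json")
    .create();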

Jest detects open handles

I'm using ts-jest to run my virtual-alexa tests. I was running into problems with Jest not exiting properly, or returning exit code 0 even when tests were failing, so I added the --forceExit and --detectOpenHandles flags to my Jest command. This detected open handles in the SkillResponse module, seemingly coming from somewhere in lodash:

Jest has detected the following 6 open handles potentially keeping Jest from exiting:

●  PROMISE

          at Function.resolve (<anonymous>)
      at Object.<anonymous> (node_modules/lodash/_getTag.js:37:32)
      at Object.<anonymous> (node_modules/lodash/_baseClone.js:12:14)


●  PROMISE

          at Function.resolve (<anonymous>)
      at runInContext (node_modules/lodash/lodash.js:6069:36)
      at Object.<anonymous> (node_modules/lodash/lodash.js:17078:11)
      at Object.<anonymous> (node_modules/lodash/lodash.js:17105:3)
      at Object.<anonymous> (node_modules/virtual-alexa/lib/src/core/SkillResponse.js:3:9)


●  PROMISE

          at Function.resolve (<anonymous>)
      at Object.<anonymous> (node_modules/lodash/_getTag.js:37:32)
      at Object.<anonymous> (node_modules/lodash/isEmpty.js:2:14)


●  PROMISE

          at Function.resolve (<anonymous>)
      at Object.<anonymous> (node_modules/lodash/_getTag.js:37:32)
      at Object.<anonymous> (node_modules/lodash/_baseClone.js:12:14)


●  PROMISE

          at Function.resolve (<anonymous>)
      at runInContext (node_modules/lodash/lodash.js:6069:36)
      at Object.<anonymous> (node_modules/lodash/lodash.js:17078:11)
      at Object.<anonymous> (node_modules/lodash/lodash.js:17105:3)
      at Object.<anonymous> (node_modules/virtual-alexa/lib/src/core/SkillResponse.js:3:9)


●  PROMISE

          at Function.resolve (<anonymous>)
      at Object.<anonymous> (node_modules/lodash/_getTag.js:37:32)
      at Object.<anonymous> (node_modules/lodash/isEmpty.js:2:14)

Here's an example of one of my tests that leads to this issue. All of my intents return promises.

test('GeneralHelpIntent should help the user', async () => {
    const deviceId = uuid();
    alexa.context().device().setID(deviceId);
    expect(alexa.context().device().id()).toEqual(deviceId);

    const logoutResponse = await alexa.utter('logout');
    expect(logoutResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(logoutResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const helpResponse = await alexa.intend('AMAZON.HelpIntent');
    expect(helpResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(helpResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const launchResponse = await alexa.launch();
    expect(launchResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(launchResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const loginResponse = await alexa.utter(`My name is ${testConstants.userName}`);
    expect(loginResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(loginResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const usernameConfirmationResponse = await alexa.utter('yes');
    expect(usernameConfirmationResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(usernameConfirmationResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const helpResponse2 = await alexa.intend('AMAZON.HelpIntent');
    expect(helpResponse2['response'].outputSpeech.ssml).toBeDefined();
    expect(helpResponse2['response'].outputSpeech.ssml).toMatchSnapshot();

    const joinResponse = await alexa.utter('join channel test');
    expect(joinResponse['response'].outputSpeech.ssml).toBeDefined();
    expect(joinResponse['response'].outputSpeech.ssml).toMatchSnapshot();

    const joinConfirmResponse = await alexa.utter('yes');
    expect(joinConfirmResponse['response'].outputSpeech).toBeUndefined();
    expect(joinConfirmResponse['response'].shouldEndSession).toBeDefined();
    expect(joinConfirmResponse['response'].shouldEndSession).toBeTruthy();

    const helpResponse3 = await alexa.intend('AMAZON.HelpIntent');
    expect(helpResponse3['response'].outputSpeech.ssml).toBeDefined();
    expect(helpResponse3['response'].outputSpeech.ssml).toMatchSnapshot();
}, 15000);

How to simulate device permissions

Hi,

I am trying to test an Alexa skill that uses device permissions to get the user's device address. How can I simulate that using virtual-alexa?

Thank you,
Kranthi.

Working with typescript (using ts-jest)

I'm setting up a TypeScript testing pipeline using Jest (ts-jest) for snapshot testing. ts-jest works by preprocessing your .ts files into .js for testing; i.e., in my package.json:

  "jest": {
    "transform": {
      "^.+\\.tsx?$": "ts-jest"
    },
    "testMatch": [
      "**/__tests__/*.+(ts|tsx|js)"
    ],
    "moduleFileExtensions": [
      "ts",
      "tsx",
      "js",
      "jsx",
      "json",
      "node"
    ]
  }

This was working with BSTAlexa with some tweaks to how it was interpreting the intent schema, but after switching to va.VirtualAlexa I'm now getting a problem where SkillInteractor.js cannot find the module src/index.js:

    Cannot find module '/my-skill-folder/src/index.js' from 'ModuleInvoker.js'

      at Resolver.resolveModule (node_modules/jest-resolve/build/index.js:169:17)
      at Function.invokeHandler (node_modules/virtual-alexa/lib/src/ModuleInvoker.js:10:31)
      at LocalSkillInteractor.invoke (node_modules/virtual-alexa/lib/src/LocalSkillInteractor.js:13:50)
      at LocalSkillInteractor.<anonymous> (node_modules/virtual-alexa/lib/src/SkillInteractor.js:77:39)

Is this because index.js does not exist on disk? I'm not exactly sure.
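
One hedged workaround, assuming the compiled output lands in dist/: run tsc before the tests and point the handler at the emitted JavaScript, so ModuleInvoker can resolve a real .js file:

const alexa = va.VirtualAlexa.Builder()
    .handler("dist/index.handler") // tsc output path, assumed for illustration
    .interactionModelFile("./speechAssets/InteractionModel.json")
    .create();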

Expose the lambda context in the virtual alexa filter() method

The ability to tweak the lambda context as well in VirtualAlexa (filter function) would be a nice feature. Currently it only offers the request (which is the alexa payload in case of BST tools).

Use case

Our lambda is exposed with the AWS API GW (with Serverless) and it serves multiple skills, both on Alexa and Google Assistant and it relies on the request path to figure out the platform and the skill.

We use the BST proxy for debugging, and the Virtual Alexa for unit testing. The BST proxy puts the request path and the query parameters into the lambda context (request attribute). We need to mock that with VirtualAlexa in the unit tests.

But there is a bigger picture...

It would be really cool if the BST tools offered an option to simulate the event and context format of the API Gateway Lambda Proxy. This is the default with Serverless or ClaudiaJS installations. The important difference is that Amazon uses the event to pass in information about the HTTP request (BST sends the payload in the event). Amazon's callback response format is different too: they want you to wrap the response in a simple object.

The bottom line is this: the consolidation of the lambda event and context structures would make the code simpler for lambda projects exposed with Serverless or ClaudiaJS (anything that uses the default API GW Lambda proxy format).

To illustrate the "lambda context hell" we are in, this is what we call first thing in the lambda handler to sniff out the environment:

import * as fs from "fs";

/**
 * Pick out the data we need, depending on the environment we run in.
 * Currently we support AWS default lambda proxy, bst proxy (real-time debugging) 
 * and VirtualAlexa (unit testing).
 *
 * Additionally attach a function to build the response in a way that is specific to the
 * lambda environment.
 *
 * @param event
 * @param context
 * @returns {any}
 */
export function translateLambdaContext(event: any, context: any): any {
    if (!event || !context) { return {}; }

    let eventContext = {};

    if (event.requestContext) {
        eventContext = lambdaProxyContext(event, context);
    } else if (context.request) {
        eventContext = bstContext(event, context);
    } else if (event.testContext) {
        eventContext = virtualBstContext(event, context);
    }

    return eventContext;
}

/**
 * BST format
 *
 * @param lambdaEvent
 * @param lambdaContext
 * @returns {any}
 */
function bstContext(lambdaEvent: any, lambdaContext: any): any {
    const [path] = lambdaContext.request.url.split("?");

    const params = Object.assign({}, parsePath(path));
    params.rawBody = JSON.stringify(lambdaEvent);
    params.body = lambdaEvent;

    params.buildResponse = (code: number, result: any): any => {
        return result;
    };

    return params;
}

/**
 * Virtual BST format
 *
 * The path is piggybacked on the payload (lambda event) for now.
 *
 * @param lambdaEvent
 * @param lambdaContext
 * @returns {any}
 */
function virtualBstContext(lambdaEvent: any, lambdaContext: any): any {
    const path = lambdaEvent.testContext.path;

    const params = Object.assign({}, parsePath(path));
    params.rawBody = JSON.stringify(lambdaEvent);
    params.body = lambdaEvent;

    params.buildResponse = (code: number, result: any): any => {
        return result;
    };

    return params;
}

/**
 * Basic AWS API Gateway lambda proxy format
 *
 * @param lambdaEvent
 * @returns {any}
 */
function lambdaProxyContext(lambdaEvent: any, lambdaContext: any): any {
    if (!lambdaEvent.path) {
        return {};
    }

    const path = lambdaEvent.path;

    const params = Object.assign({}, parsePath(path));
    params.rawBody = lambdaEvent.body;
    params.body = JSON.parse(lambdaEvent.body);
    params.headers = lambdaEvent.headers;
    params.alexaApplicationId =
        lambdaEvent.queryStringParameters ? 
           lambdaEvent.queryStringParameters.alexaApplicationId : undefined;

    params.buildResponse = (code: number, result: any): any => {
        return {
            statusCode: code,
            body: JSON.stringify(result)
        };
    };

    return params;
}

/**
 * This follows the current path convention: .../dev/apps/{appId}/run/{platform}
 *
 * @param {string} path
 * @returns {any}
 */
function parsePath(path: string): any {
    const params: any = {};

    const pathParts: any = path.split("/");
    params.platform = pathParts.pop();
    pathParts.pop(); // "/run/"
    params.appId = pathParts.pop();

    return params;
}

Support DynamoDB.DeleteItem in DynamoDB.mock()

The DynamoDB mocking already supports CreateTable, PutItem and GetItem. It would be useful to have support for DeleteItem as well.

The ASK SDK DynamoDB Persistence Adapter supports delete. If a skill uses this adapter, virtual-alexa cannot be used to test this delete functionality without a workaround.

https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs/blob/2.0.x/ask-sdk-dynamodb-persistence-adapter/lib/attributes/persistence/DynamoDbPersistenceAdapter.ts

One possible workaround is to use nock inside the tests to mock the call to DeleteItem (which is what the bespoken DynamoDB class does internally for the methods it does implement); a concrete sketch follows below.

It would be much nicer if DeleteItem were supported within the DynamoDB class.

If you approve this feature, I can submit a PR for this work.
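
To make that workaround concrete, here is a hedged sketch using nock directly. The region and endpoint are assumptions; DynamoDB's HTTP API names the operation in the X-Amz-Target header:

import * as nock from "nock";

// Intercept only DeleteItem; the bespoken mock keeps handling the other operations.
nock("https://dynamodb.us-east-1.amazonaws.com")
    .persist()
    .post("/")
    .matchHeader("x-amz-target", "DynamoDB_20120810.DeleteItem")
    .reply(200, "{}");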

Support "open", "ask", "launch", "talk to" and "tell" for utterances

When running unit tests, any utterance that starts with these words should be automatically treated as a launch request.

Additionally, if it starts with these words, we should look for the pattern:
<ask|open|launch|talk to|tell> <invocation_name> to <utterance>

This can be turned into a regex like so:

/^(?:ask|open|launch|talk to|tell) .* to (.*)/i

If the regex matches, we use the captured part after the "to" as the utterance, to be fed to the utter method (as opposed to calling a launch request).
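
Putting it together, a sketch of the proposed routing (the route helper is illustrative):

const INVOCATION = /^(?:ask|open|launch|talk to|tell) .* to (.*)/i;
const LAUNCH_WORD = /^(?:ask|open|launch|talk to|tell)\b/i;

function route(alexa, utterance) {
    const match = utterance.match(INVOCATION);
    if (match) {
        return alexa.utter(match[1]); // "<verb> <invocation> to <utterance>": utter the capture
    }
    if (LAUNCH_WORD.test(utterance)) {
        return alexa.launch();        // bare launch phrase: plain launch request
    }
    return alexa.utter(utterance);    // anything else is a normal utterance
}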

feat(skill-interactor): handle multiple (concurrent) device ids

I'm wondering how difficult it may be to simulate multiple users at once. When the SkillInteractor is instantiated, what would have to change for it to be considered a different user?

Maybe alexa.launch() should have an optional deviceId parameter to launch the simulator with a second user? Not sure if this would be easy/possible to implement, but I'd be happy to help!
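
For example, the optional parameter might look like this (purely hypothetical signature):

// Hypothetical: a per-call device ID so two simulated users can interleave.
const first = await alexa.launch({ deviceId: "test-device-id-1" });
const second = await alexa.launch({ deviceId: "test-device-id-2" });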

Intent confirmation for non-delegated dialogs

Currently, Dialog.ConfirmIntent does not work for alexa skills using dialogs without delegating to Alexa.

For example...

myHandler.js

const MyIntentHandler = {
    canHandle(input) {
        return (
            AlexaUtils.getRequestType(input) === 'IntentRequest' &&
            AlexaUtils.getIntent(input).name === 'MyIntent'
        )
    },
    handle(input) {
        const intent = AlexaUtils.getIntent(input); // added: intent was previously undefined
        if (intent.confirmationStatus !== 'CONFIRMED') {
            if (intent.confirmationStatus !== 'DENIED') {
                return input.responseBuilder
                    .speak('Do you want to do this?')
                    .addConfirmIntentDirective(intent)
                    .getResponse()
            }
            return input.responseBuilder.speak("Ok, I won't do this.").getResponse()
        }
        return input.responseBuilder.speak('Ok. I will do this').getResponse()
    }
}

myHandler.test.js

it('should respond with a denial message on intent deny', async () => {
   await alexa.utter('Do my intent')
   const { response } = await alexa.utter('no')
   expect(response.outputSpeech.ssml).to.contain("Ok, I won't do this.")
})

Expected Outcome: The intent's confirmationStatus is set to DENIED
Actual Outcome: The intent's confirmationStatus stays undefined

I think the cause of this is

&& this._confirmationStatus === ConfirmationStatus.NONE

It looks like the confirmationStatus is only set to NONE if you use Dialog.Delegate first, so the confirmationStatus is never updated in this case.

Alexa code in Google Cloud

Hello,

We have migrated our Alexa AWS code to Google Cloud. Can you please help me with how to initialize it?

const alexa = va.VirtualAlexa.Builder()
    .handler('./src/AlexaSkill.LaunchRequestHandler')
    .applicationID('https://alexa-dev.cloudfunctions.net/mapp')
    .interactionModelFile('./model.json')
    .create();

describe('Launch Intent', () => {
    it('Returns the correct prompt', async () => {
        const result = await alexa.launch();
        console.log(result);
        // const result = await alexa.utter('ask mybeta');
    });
});

I am getting the following error:

  1. Launch Intent
    Returns the correct prompt:
    TypeError: lambdaFunction is not a function

Thanks,
Angavai S.

Relative paths not handled correctly for specifying handler

If the handler is specified starting with "." or ".." it does not function correctly.

For example, if the user enters their path to their Lambda as "../index.handler", it will not find that file correctly.

Having looked at the code, the fix should be straightforward - we should just look for the last "." in the handler name as opposed to splitting on any of them, as is done here:
https://github.com/bespoken/virtual-alexa/blob/master/src/ModuleInvoker.ts#L5
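
The suggested fix as a sketch: split on the last dot only, so a relative prefix keeps its dots:

// "../index.handler" yields module "../index" and exported function "handler".
function parseHandler(handler) {
    const dot = handler.lastIndexOf(".");
    return [handler.substring(0, dot), handler.substring(dot + 1)];
}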

Charts on Dashboard issues

Build = dev-70

General:

Main issue (Source, Validation and Check Stats):

  1. Avg response time always shows 0.00, even when the chart in Check Stats shows data.

Small issues / enhancements:

  1. If the skill is not integrated, we shouldn't show empty charts; we should display a message prompting the user to integrate the skill.
  2. The "Daily Events" chart on the Source and Validation pages is misleading: the name suggests the number of events per day, but the chart shows the average number of events per hour. Talking to Pier about it, he tells me it is possible to have 168 blocks in that chart; according to Pier the image won't break, but I get the feeling it won't be pretty either. We might want to review that chart and, if we keep it, set a maximum number of blocks allowed.

Source Page:

  1. drop down is showing

Validation Page:

  1. Numbers should be aligned (Total events, Unique users, Total errors)
  2. "Users" in the label "Unique Users" should be lower case

Check Stats Page:

  1. Chart title labels don't all have the same size
  2. "Number of events per Intent" and "Daily events" are not aligned when the "Source up time" chart is not enabled
  3. The charts on the left don't show the same days
  4. Obscure replicated issue: on 3 occasions other intents appeared in the "Number of events per intent" chart (talked to Juan about it; he will do some digging on that)

Enhancement:
The space for the "Source up time" chart should display a message in the bottom right prompting the user to enable monitoring to see the chart.

Add reply wrapper, so that emulator can easily be used with assertion frameworks

Overview
Create a reply wrapper, which adds convenience methods to the reply payload for testing and other uses.

Rationale
We prefer a reply wrapper to building our own expectations/assertions because there are already many fine assertion frameworks out there. However, making it easy to get data from the JSON response for Alexa is very useful.

Example
Here is an example of how it might work that uses Jest:

reply = ReplyAdapter.transform(reply); // Adds helper methods
expect(reply.ssml()).toContain("<audio src=\"http://jpk.com\" />");
expect(reply.plainText()).toBeUndefined();

Useful methods

attr(key) - The session attribute - undefined if it does not exist
card() - The card - an object unto itself with properties for image
plainText() - The plain text reply - if it exists, undefined if not
reprompt() - The reprompt text - if it exists, undefined if not
ssml() - The SSML payload - undefined if it does not exist

This is a subset - we can identify more.

Influenced by this issue:
jovotech/jovo-framework#39

Dialog Flow broken

Dialog Flow does not delegate to code between filling slots.

VIRTUAL ALEXA -

    let resp = await alexa.utter('my intent')
    expect(resp.prompt).to.equal('What can we help you with?')
    resp = await alexa.utter('slot 1 answer')
    expect(resp.prompt).to.equal('Which part?')
    resp = await alexa.utter('1');

SERVER OUTPUT - VIRTUAL ALEXA

The log shows 2 inputs: one where dialogState == null and one where dialogState == 'COMPLETED', without any state in between:

2018-12-10 15:08:33 DEBUG class IntentRequest {
    class Request {
        type: IntentRequest
        requestId: amzn1.echo-external.request.8e703029-9479-4cb4-90fa-8c1a400afa45
        timestamp: 2018-12-10T20:08:33Z
        locale: en-US
    }
    dialogState: null
    intent: class Intent {
        name: Intent1
        slots: {slot1=class Slot {
            name: slot1
            value: null
            confirmationStatus: NONE
            resolutions: null
        }, slot2=class Slot {
            name: slot2
            value: null
            confirmationStatus: NONE
            resolutions: null
        }}
        confirmationStatus: null
    }
}

Optional[class Response {
    outputSpeech: null
    card: null
    reprompt: null
    directives: [class DelegateDirective {
        class Directive {
            type: Dialog.Delegate
        }
        updatedIntent: class Intent {
            name: Intent1
            slots: {slot1=class Slot {
                name: slot1
                value: null
                confirmationStatus: NONE
                resolutions: null
            }, slot2=class Slot {
                name: slot2
                value: null
                confirmationStatus: NONE
                resolutions: null
            }}
            confirmationStatus: null
        }
    }]
    shouldEndSession: null
    canFulfillIntent: null
}]

2018-12-10 15:08:33 DEBUG - class IntentRequest {
    class Request {
        type: IntentRequest
        requestId: amzn1.echo-external.request.3720b4bf-225a-45d0-908a-f340bffc503d
        timestamp: 2018-12-10T20:08:33Z
        locale: en-US
    }
    dialogState: COMPLETED
    intent: class Intent {
        name: Intent1
        slots: {slot1=class Slot {
            name: slot1
            value: slot 1 answer
            confirmationStatus: NONE
            resolutions: class Resolutions {
                resolutionsPerAuthority: [class Resolution {
                    authority: amzn1.er-authority.echo-sdk.amzn1.ask.skill.1d4e6648-fdad-457f-831a-6124d5d8f032.slot1s
                    status: class Status {
                        code: ER_SUCCESS_MATCH
                    }
                    values: [class ValueWrapper {
                        value: class Value {
                            name: Stuck
                            id: null
                        }
                    }]
                }]
            }
        }, slot2=class Slot {
            name: slot2
            value: 1
            confirmationStatus: NONE
            resolutions: null
        }}
        confirmationStatus: NONE
    }
}
Optional[class Response {
    outputSpeech: class SsmlOutputSpeech {
        class OutputSpeech {
            type: SSML
        }
        ssml: <speak>Completed</speak>
    }
    card: null
    reprompt: null
    directives: null
    shouldEndSession: null
    canFulfillIntent: null
}]

When I test this through the Alexa developer portal, I get multiple states in between, and the dialog's starting state is STARTED, not null:

2018-12-10 19:53:06 DEBUG - class IntentRequest {
    class Request {
        type: IntentRequest
        requestId: amzn1.echo-api.request.555d79fa-6add-4683-a977-f07897e022c9
        timestamp: 2018-12-10T19:53:05Z
        locale: en-US
    }
    dialogState: STARTED
    intent: class Intent {
        name: Intent1
        slots: {slot2=class Slot {
            name: slot2
            value: null
            confirmationStatus: NONE
            resolutions: null
        }, slot1=class Slot {
            name: slot1
            value: null
            confirmationStatus: NONE
            resolutions: null
        }}
        confirmationStatus: NONE
    }
}
Optional[class Response {
    outputSpeech: null
    card: null
    reprompt: null
    directives: [class DelegateDirective {
        class Directive {
            type: Dialog.Delegate
        }
        updatedIntent: class Intent {
            name: Intent1
            slots: {slot2=class Slot {
                name: slot2
                value: null
                confirmationStatus: NONE
                resolutions: null
            }, slot1=class Slot {
                name: slot1
                value: null
                confirmationStatus: NONE
                resolutions: null
            }}
            confirmationStatus: NONE
        }
    }]
    shouldEndSession: null
    canFulfillIntent: null
}]

2018-12-10 19:53:36 DEBUG - class IntentRequest {
    class Request {
        type: IntentRequest
        requestId: amzn1.echo-api.request.6bbc3b8d-8931-46b1-a78b-87c49e16b82d
        timestamp: 2018-12-10T19:53:35Z
        locale: en-US
    }
    dialogState: IN_PROGRESS
    intent: class Intent {
        name: Intent1
        slots: {slot2=class Slot {
            name: slot2
            value: null
            confirmationStatus: NONE
            resolutions: null
        }, slot1=class Slot {
            name: slot1
            value: slot 1 answer
            confirmationStatus: NONE
            resolutions: class Resolutions {
                resolutionsPerAuthority: [class Resolution {
                    authority: amzn1.er-authority.echo-sdk.amzn1.ask.skill.1d4e6648-fdad-457f-831a-6124d5d8f032.slot1s
                    status: class Status {
                        code: ER_SUCCESS_MATCH
                    }
                    values: [class ValueWrapper {
                        value: class Value {
                            name: slot 1 answer 1
                            id: d418720232c238736820b1e376d3e70b
                        }
                    }, class ValueWrapper {
                        value: class Value {
                            name: slot 1 answer 2
                            id: 0d9c296293117e8ef114e26d5c3720a8
                        }
                    }]
                }]
            }
        }}
        confirmationStatus: NONE
    }
}
Optional[class Response {
    outputSpeech: null
    card: null
    reprompt: null
    directives: [class DelegateDirective {
        class Directive {
            type: Dialog.Delegate
        }
        updatedIntent: class Intent {
            name: Intent1
            slots: {slot2=class Slot {
                name: slot2
                value: null
                confirmationStatus: NONE
                resolutions: null
            }, slot1=class Slot {
                name: slot1
                value: slot 1 answer
                confirmationStatus: NONE
                resolutions: class Resolutions {
                    resolutionsPerAuthority: [class Resolution {
                        authority: amzn1.er-authority.echo-sdk.amzn1.ask.skill.1d4e6648-fdad-457f-831a-6124d5d8f032.slot1s
                        status: class Status {
                            code: ER_SUCCESS_MATCH
                        }
                        values: [class ValueWrapper {
                            value: class Value {
                                name: slot 1 answer 1
                                id: d418720232c238736820b1e376d3e70b
                            }
                        }, class ValueWrapper {
                            value: class Value {
                                name: slot 1 answer 2
                                id: 0d9c296293117e8ef114e26d5c3720a8
                            }
                        }]
                    }]
                }
            }}
            confirmationStatus: NONE
        }
    }]
    shouldEndSession: null
    canFulfillIntent: null
}]

2018-12-10 19:53:42 DEBUG - class IntentRequest {
    class Request {
        type: IntentRequest
        requestId: amzn1.echo-api.request.2ee425bc-6aa4-4586-ae3a-4512359f452c
        timestamp: 2018-12-10T19:53:42Z
        locale: en-US
    }
    dialogState: IN_PROGRESS
    intent: class Intent {
        name: Intent1
        slots: {slot2=class Slot {
            name: slot2
            value: 1
            confirmationStatus: NONE
            resolutions: null
        }, slot1=class Slot {
            name: slot1
            value: slot 1 answer
            confirmationStatus: NONE
            resolutions: class Resolutions {
                resolutionsPerAuthority: [class Resolution {
                    authority: amzn1.er-authority.echo-sdk.amzn1.ask.skill.1d4e6648-fdad-457f-831a-6124d5d8f032.slot1s
                    status: class Status {
                        code: ER_SUCCESS_MATCH
                    }
                    values: [class ValueWrapper {
                        value: class Value {
                            name: slot 1 answer 1
                            id: d418720232c238736820b1e376d3e70b
                        }
                    }, class ValueWrapper {
                        value: class Value {
                            name: slot 1 answer 2
                            id: 0d9c296293117e8ef114e26d5c3720a8
                        }
                    }]
                }]
            }
        }}
        confirmationStatus: NONE
    }
}

Optional[class Response {
    outputSpeech: null
    card: null
    reprompt: null
    directives: [class DelegateDirective {
        class Directive {
            type: Dialog.Delegate
        }
        updatedIntent: class Intent {
            name: Intent1
            slots: {slot2=class Slot {
                name: slot2
                value: 1
                confirmationStatus: NONE
                resolutions: null
            }, slot1=class Slot {
                name: slot1
                value: slot 1 answer
                confirmationStatus: NONE
                resolutions: class Resolutions {
                    resolutionsPerAuthority: [class Resolution {
                        authority: amzn1.er-authority.echo-sdk.amzn1.ask.skill.1d4e6648-fdad-457f-831a-6124d5d8f032.slot1s
                        status: class Status {
                            code: ER_SUCCESS_MATCH
                        }
                        values: [class ValueWrapper {
                            value: class Value {
                                name: slot 1 answer 1
                                id: d418720232c238736820b1e376d3e70b
                            }
                        }, class ValueWrapper {
                            value: class Value {
                                name: slot 1 answer 2
                                id: 0d9c296293117e8ef114e26d5c3720a8
                            }
                        }]
                    }]
                }
            }}
            confirmationStatus: NONE
        }
    }]
    shouldEndSession: null
    canFulfillIntent: null
}]

2018-12-10 19:53:42 DEBUG - class IntentRequest {
    class Request {
        type: IntentRequest
        requestId: amzn1.echo-api.request.00628a6b-82a8-466c-ac19-975ad6c81130
        timestamp: 2018-12-10T19:53:42Z
        locale: en-US
    }
    dialogState: COMPLETED
    intent: class Intent {
        name: Intent1
        slots: {slot1=class Slot {
            name: slot1
            value: slot 1 answer
            confirmationStatus: NONE
            resolutions: class Resolutions {
                resolutionsPerAuthority: [class Resolution {
                    authority: amzn1.er-authority.echo-sdk.amzn1.ask.skill.1d4e6648-fdad-457f-831a-6124d5d8f032.slot1s
                    status: class Status {
                        code: ER_SUCCESS_MATCH
                    }
                    values: [class ValueWrapper {
                        value: class Value {
                            name: slot 1 answer 1
                            id: d418720232c238736820b1e376d3e70b
                        }
                    }, class ValueWrapper {
                        value: class Value {
                            name: slot 1 answer 2
                            id: 0d9c296293117e8ef114e26d5c3720a8
                        }
                    }]
                }]
            }
        }, slot2=class Slot {
            name: slot2
            value: 1
            confirmationStatus: NONE
            resolutions: null
        }}
        confirmationStatus: NONE
    }
}
Optional[class Response {
    outputSpeech: class SsmlOutputSpeech {
        class OutputSpeech {
            type: SSML
        }
        ssml: <speak>Completed</speak>
    }
    card: null
    reprompt: null
    directives: null
    shouldEndSession: null
    canFulfillIntent: null
}]

Are you interested in supporting the Alexa Presentation Language?

Overview

If I try to implement tests with the Alexa Presentation Language, I can't, because of this test library's interface. The supportedInterface property is built into the Device class, but there is no method to set the Alexa.Presentation.APL.RenderDocument directive, and I can't access a setter for supportedInterface (I think that's the correct design, but it blocks this use case).

What do you think about supporting Alexa Presentation Language testing? If you're interested, I'll implement it and send a PR.

Howto: use a locally hosted lambda

Hi

I have a lambda hosted in localstack. How can I use that as the handler for running virtual-alexa? (This is not a JS-based lambda, so I cannot use the normal handler syntax.)

Since it is not really a hosted endpoint, I am unable to use the url property.

AMAZON.LITERAL utterances parsing

Hi, I'm trying to use the lib, but I have a LITERAL slot which doesn't seem to be properly parsed by the bespoken/virtual-core lib. Any idea how to get that processed correctly?

PS: I love this lib and I really want to use it. I'm pretty sure the problem is the format of the LITERAL slot with the pipe, but I'd rather ask you guys first.

If you have an idea and no time, just throw it in and I'll do my best to get a PR up for it.

Intent:

{
    "name": "LiteralInput",
    "slots": [
        {
            "name": "literal",
            "type": "AMAZON.LITERAL"
        }
    ],
    "samples": [
        "{test | literal}",
        "{test2 | literal}"
    ]
}

Error:

Error: Invalid schema - not slot: test | literal for intent: LiteralInput
      at SamplePhraseTest.checkSlots (node_modules/virtual-core/lib/src/SampleUtterances.js:128:23)
      at new SamplePhraseTest (node_modules/virtual-core/lib/src/SampleUtterances.js:78:36)
      at SamplePhrase.matchesUtterance (node_modules/virtual-core/lib/src/SampleUtterances.js:54:16)
      at Utterance.matchIntent (node_modules/virtual-core/lib/src/Utterance.js:49:41)
      at new Utterance (node_modules/virtual-core/lib/src/Utterance.js:7:14)
      at LocalSkillInteractor.SkillInteractor.spoken (node_modules/virtual-alexa/lib/src/impl/SkillInteractor.js:64:25)
      at VirtualAlexa.utter (node_modules/virtual-alexa/lib/src/core/VirtualAlexa.js:53:33)

resolutions information for custom slot types

I want to test my code which relies on the resolutions information for custom slot types.
As far as I understood, virtual-alexa does not yet support this.
Is my understanding correct?
If yes, is this feature planned in virtual-alexa?
Would you accept a PR?

session is undefined when using filter with addRequestInterceptors

Description:

The filter method doesn't work with .addRequestInterceptors(theRequestInterceptor), as the replaced value of session is not set. Filter works fine for setting values that will appear in each intent handler (as recommended by the VirtualAlexa readme), but if I had to guess, I would say that the "filtered value" is only passed through to the intent handlers and not to the interceptor.

Environment:

  • Version:
  • OS: Windows 10
  • Node version: node 8.11.3

Steps To Reproduce

Steps to reproduce the behavior:

  1. set handlerInput.requestEnvelope.session.user.accessToken to a value by using filter
  2. utilize .addRequestInterceptors by passing in an object of type Alexa.RequestInterceptor as defined in ask-sdk
  3. set a breakpoint in that object's process method and check out what the value is

Expected behavior

The value set in filter will be the value that gets sent into the interceptor

Actual behavior

The value set in filter is undefined in the interceptor's process function

Code example

The interceptor

export const theRequestInterceptor : Alexa.RequestInterceptor = {
    async process(handlerInput : Alexa.HandlerInput) {
        const accessToken = handlerInput.requestEnvelope.session ? handlerInput.requestEnvelope.session.user.accessToken : undefined;
        debugger;
    }
};

The code that uses it:

Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        ...allHandlers
    )
    .addRequestInterceptors(theRequestInterceptor)
    .addErrorHandlers(ErrorHandler)
    .lambda();

The test code:

const utteranceResponse = await alexa
    .filter(requestJson => {
        requestJson.session.user.accessToken = token2use;
    })
    .utter("help");

AlexaSkill ID verification failed.

I'm running into an issue where the AlexaSkill ID verification is failing in my application. Here is the stack:

com.amazon.ask.exception.AskSdkException: AlexaSkill ID verification failed.
        at com.amazon.ask.CustomSkill.invoke(CustomSkill.java:64) ~[ask-sdk-core-2.9.1.jar:na]
        at com.amazon.ask.CustomSkill.invoke(CustomSkill.java:53) ~[ask-sdk-core-2.9.1.jar:na]
        at com.amazon.ask.servlet.SkillServlet.doPost(SkillServlet.java:106) ~[classes/:na]

here is my test code

const vax = require("virtual-alexa");

it('launches successfully', async () => {
  const alexa = vax.VirtualAlexa.Builder()
    .skillURL('http://127.0.0.1:8080/')
    .interactionModelFile('./interaction.json')
    .create();
  // Await the call so the test fails (rather than passing silently) on error.
  const payload = await alexa.utter('ask me');
  console.log(payload);
});

I've also tried doing the .launch() instead of the utter and I get the same result.
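
A hedged guess, based on the applicationID() builder method that appears in other issues here: the servlet presumably rejects the application ID virtual-alexa generates, so pinning it to the skill ID the server expects may help:

const alexa = vax.VirtualAlexa.Builder()
    .skillURL('http://127.0.0.1:8080/')
    .interactionModelFile('./interaction.json')
    .applicationID('amzn1.ask.skill.your-skill-id') // hypothetical placeholder value
    .create();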
