This is the bot for our e-comm demo: a bot that takes care of all of your shopping needs in one go.
- Demo: https://ns-botlibrary-ecomm.uksouth.cloudapp.azure.com/ecomm/index.html
- Rasa X: https://ns-botlibrary-ecomm.uksouth.cloudapp.azure.com/
- Kibana: https://ns-botlibrary-ecomm.uksouth.cloudapp.azure.com/bot-analytics/
For the username and password for both, contact the product owner.

The bot supports the following features:
- Login and Logout
- Check All Orders
- Show More
- Product Return/Replace
- Product Inquiry
- Personalized shopping, and more
- Python 3.7. If you don't have Python 3.7, consider installing conda with Python 3.7. Once it is installed, activate the base environment and then run the instructions below.
- Docker
- Docker Compose
- Helm
- Kubernetes
- Azure CLI
- PyCharm: mark `./src` as the content root
- Others: set this environment variable with the following command: `export PYTHONPATH=./src`
- A Makefile with various helpful targets, e.g.:

```shell
# install system-level dependencies
make bootstrap

# configure the NS private PyPI repo
# (get the username and password from the project admin)
poetry config http-basic.neuralspace <private-pypi-username> <private-pypi-password>

# install the virtual environment and project-level dependencies
make install

# run unit tests
make test

# run black code formatting and isort
make format

# run flake8 and validate code formatting
make lint
```
First, run:

```shell
make install
```

to install and update packages. After that, build the Docker image, run `docker-compose up`, and initialize Elasticsearch for the first time. Make sure to upload the data. To do that, run:
```shell
python src/dash_ecomm/elastic_search_data_upload.py
```

This script uploads the data to the Elasticsearch node.
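The upload script's internals aren't shown in this README; as an illustration, a bulk upload with the official `elasticsearch` Python client might look like the sketch below. The index name `products` and the document fields are assumptions for illustration, not the script's actual schema.

```python
# Sketch of a bulk upload to a local single-node Elasticsearch.
# Index name and document shape are illustrative assumptions.

def to_bulk_actions(docs, index="products"):
    # Convert plain dicts into the action format expected by helpers.bulk()
    return [{"_index": index, "_source": doc} for doc in docs]

def upload(docs):
    # Imported lazily so the sketch stays importable without the package installed
    from elasticsearch import Elasticsearch, helpers
    es = Elasticsearch("http://localhost:9200")  # default single-node address
    helpers.bulk(es, to_bulk_actions(docs))

if __name__ == "__main__":
    sample = [{"name": "t-shirt", "price": 9.99}]
    print(to_bulk_actions(sample))
```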
When you run `docker-compose up`, Kibana will start within about 5 minutes. You can check it at http://localhost:5601.
Then, to set up the image, run:

```shell
docker-compose build
docker-compose run rasa-x poetry run rasa train
```

Then start all servers:

```shell
docker-compose up
```
`config.yml`, `credentials.yml`, and `endpoints.yml` get added to the Docker image. Make sure to rebuild the image after making changes to these files.
Once all the containers are up, go to http://localhost:7000.
Models trained using docker-compose won't work locally. If you are running things locally, you have to train a model locally:

```shell
poetry run rasa train --config configs/local/config.yml
```
Then, to run the bot, first start your action server in one terminal window:

```shell
poetry run rasa run actions
```
In another window, run the duckling server (for entity extraction):

```shell
docker run -p 8000:8000 rasa/duckling
```
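Duckling exposes a small HTTP API on the port mapped above: you `POST` form-encoded `text` and `locale` fields to `/parse` and get extracted entities back as JSON. A minimal sketch (the port comes from the command above; the example sentence is arbitrary):

```python
# Sketch: build a request for the duckling server's /parse endpoint.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_parse_request(text: str, locale: str = "en_GB") -> Request:
    # duckling expects form-encoded fields: text, locale (and optionally dims)
    body = urlencode({"text": text, "locale": locale}).encode()
    return Request("http://localhost:8000/parse", data=body, method="POST")

if __name__ == "__main__":
    req = build_parse_request("remind me in 10 minutes")
    # with urlopen(req) as resp:          # requires the duckling container to be running
    #     print(resp.read().decode())
    print(req.full_url)
```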
In another window, run the callback server for reminders and scheduled requests:

```shell
poetry run python -m dash_ecomm.callback_server
```
Then talk to your bot by running:

```shell
poetry run rasa run --enable-api --cors "*" --endpoints configs/local/endpoints.yml --credentials configs/local/credentials.yml
```
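With the server above running, you can also talk to the bot over HTTP via Rasa's REST channel: `POST` a JSON body of the form `{"sender": ..., "message": ...}` to `/webhooks/rest/webhook` (port 5005 is Rasa's default). A minimal sketch, with the sender id chosen arbitrarily:

```python
# Sketch: send a message to the bot through Rasa's REST channel.
import json
from urllib.request import Request, urlopen

def build_message(sender: str, text: str) -> bytes:
    # Payload shape required by the REST channel
    return json.dumps({"sender": sender, "message": text}).encode()

def send(text: str, sender: str = "demo-user"):
    req = Request(
        "http://localhost:5005/webhooks/rest/webhook",
        data=build_message(sender, text),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:            # requires the bot server to be running
        return json.load(resp)            # list of bot responses

if __name__ == "__main__":
    print(json.loads(build_message("demo-user", "hi")))
```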
Open the file demo/local/index.html in your browser.
You can also talk to the bot in a terminal:

```shell
poetry run rasa shell --debug --endpoints endpoints-local.yml
```

Note that `--debug` mode will produce a lot of output meant to help you understand how the bot is working under the hood. To simply talk to the bot, you can remove this flag.
Note that there are two copies of the demo page: one at demo/local and another at demo/prod. If you want any changes to be reflected in prod, update the prod files as well. Similarly, config files for prod are kept in configs/prod; change these files for the changes to take effect in prod.
- `data/stories` - contains stories
- `data/nlu` - contains NLU training data
- `data/rules.yml` - contains rules
- `actions.py` - contains custom action/API code
- `domain.yml` - the domain file, including bot response templates
- `config.yml` - training configurations for the NLU pipeline and policy ensemble
- `tests/test_stories.yml` - end-to-end test stories
You can test the bot on test conversations by running `rasa test`. This will run end-to-end testing on the conversations in `tests/test_stories.yml`. Note that if duckling isn't running when you do this, you'll see some failures.
- Start development by initializing the Rasa bot.
- While developing the bot, first start with creating intents.
- Now start Rasa X and begin interactive training.
- Add utterances as needed directly to `domain.yml` instead of using Rasa X to add them.
- Add stories directly into `stories` or into their respective files.
- Add intents directly into `nlu.yml` or into their respective files.
- Add rules directly into `rules.yml` or into their respective files.
- Why do this?
  - Rasa X formats the files differently, and the result is not clean.
  - It disrupts the flow, and once many use cases are added the files look like a mess.
- Add actions as needed while doing interactive training.
- Make sure to follow clean-code methodology.
- Commit code every day, even if you only made small additions.
These steps need to be followed when setting up the cluster for the first time.
- Create a namespace:

  ```shell
  kubectl apply -f deployment/namespace.yml
  ```

- Create a static IP:

  ```shell
  az network public-ip create \
    --resource-group <your-cluster-resource-group> \
    --name <some-name-for-your-ip> \
    --sku Standard \
    --allocation-method static \
    --query publicIp.ipAddress -o tsv
  ```
- Create an ingress controller. Make sure to update the `YOUR_STATIC_IP` and `DNS_LABEL` variables:

  ```shell
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace dash-ecomm \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.service.loadBalancerIP="YOUR_STATIC_IP" \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="ns-botlibrary-ecomm"
  ```
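The `azure-dns-label-name` annotation is what gives the load balancer its public DNS name: Azure combines the label with the cluster's region to form `<label>.<region>.cloudapp.azure.com`, which is why the demo URLs at the top of this README live under `ns-botlibrary-ecomm.uksouth.cloudapp.azure.com`. A quick sketch of the mapping (the region is inferred from the demo URL):

```shell
# Azure DNS label -> FQDN mapping (label from the helm command above,
# region inferred from the demo URL in this README)
DNS_LABEL="ns-botlibrary-ecomm"
REGION="uksouth"
echo "${DNS_LABEL}.${REGION}.cloudapp.azure.com"
```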
- Test that you have an external IP:

  ```shell
  kubectl --namespace dash-ecomm get services -o wide -w nginx-ingress-ingress-nginx-controller
  ```
- Install a certificate manager if you don't have one already.

  Label the cert-manager namespace to disable resource validation:

  ```shell
  kubectl label namespace dash-ecomm cert-manager.io/disable-validation=true
  ```

  Add the Jetstack Helm repository:

  ```shell
  helm repo add jetstack https://charts.jetstack.io
  helm repo update
  ```

  Deploy a certificate manager on the cluster:

  ```shell
  helm install \
    cert-manager \
    --namespace dash-ecomm \
    --version v0.16.1 \
    --set installCRDs=true \
    --set nodeSelector."beta\.kubernetes\.io/os"=linux \
    jetstack/cert-manager
  ```
- Deploy a CA issuer:

  ```shell
  kubectl apply -f deployment/cluster-issuer.yml
  ```

- Create an SSL certificate:

  ```shell
  kubectl apply -f deployment/certificates.yml
  ```
- Create a username and password to protect the demo page:

  ```shell
  # create a username and password
  htpasswd -c auth <some-username>

  # create a kubernetes secret to store the credentials
  kubectl -n dash-ecomm create secret generic basic-auth --from-file=auth
  ```
- Deploy RabbitMQ, Kibana, and Elasticsearch for the first time. This deploys:

  - RabbitMQ
  - Elasticsearch (single node)
  - Kibana

  ```shell
  make deploy-es-kib-rmq
  ```
- Deploy all services for the first time. This deploys the following services:

  - Rasa Actions Server
  - Rasa Callback Server
  - Duckling Server
  - Rasa X
  - Ecomm Demo Page
  - Ingress for Rasa X, Rasa Core, and the demo
  - Rasa event consumer (for logging and analytics)

  ```shell
  make deploy-all
  ```
Staging deployment happens in the CI pipeline. Every time we merge something into master, a new version of the bot is deployed.