Download: sudo docker pull hausss/edgent-demo
Run: sudo docker run --net=host -it hausss/edgent-demo bash
note: `--net=host` gives the Docker container access to the host's network (so make sure you aren't already running Zookeeper or Kafka on the host)
Leave this terminal running and open two more terminals with docker exec -it <container id> bash
You can find the container id with sudo docker ps
To start Zookeeper: ./kafka/bin/zookeeper-server-start.sh ./kafka/config/zookeeper.properties
To start Kafka: ./kafka/bin/kafka-server-start.sh ./kafka/config/server.properties
To get a text editor: apt install nano
cd edgent
HelloEdgent.java is a simple Hello Edgent program. Run it with ./run-helloEdgent.sh
TempSensorApp.java polls a simulated temperature generator. Run it with ./run-tempSensorApp.sh
TempSensorPubApp.java builds on TempSensorApp and publishes the temperatures to Kafka. Run it with ./run-tempSensorPubApp.sh
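Conceptually, the simulated sensor is just a polled random-walk source. Here is a minimal Python sketch of that idea (names and values are illustrative, not the lab's actual Java/Edgent code):

```python
import random

class SimulatedTempSensor:
    """Random-walk temperature source, loosely mirroring the lab's
    simulated temperature generator (class and parameter names are hypothetical)."""
    def __init__(self, start=65.0, step=1.0):
        self.temp = start
        self.step = step

    def get(self):
        # Drift the reading by a small random amount on each poll.
        self.temp += random.uniform(-self.step, self.step)
        return round(self.temp, 1)

sensor = SimulatedTempSensor()
readings = [sensor.get() for _ in range(5)]
print(readings)
```

In the real application, Edgent's topology.poll() drives a supplier like this at a fixed period to produce the stream.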
This project includes a Maven wrapper script (mvnw), so there is no need to download and install Maven manually.
Edit your edgent java files which are located in /lab/edgent/src/main/java/com/mycompany/app
Then (back in the edgent directory) to build the jar file run:
./mvnw clean package
This will package up your code and place it into the uber-jar (target/*-uber.jar)
KafkaClient.java defines three topics:
OPT_TOPIC = kafkaTempsTopic
OPT_TOPIC_2 = kafkaHighTempTopic
OPT_TOPIC_3 = kafkaAverageTopic
To publish to one of these three topics, uncomment the following lines of code in TempSensorPubApp.java:
Map<String,Object> config2 = newConfig();
KafkaProducer kafka2 = new KafkaProducer(topology, () -> config2);
kafka2.publish(YOUR_STREAM_HERE, options.get(OPT_TOPIC_2));
Replace YOUR_STREAM_HERE with the stream object you want to hook up to Kafka.
You may need to create the topics with the following command:
./kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TOPIC_NAME
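If you need all three topics, you can script their creation. A sketch that builds the same kafka-topics.sh invocation for each topic name defined in KafkaClient.java (only uncomment the subprocess line inside the container, where the kafka directory exists):

```python
import subprocess

# Topic names from KafkaClient.java
TOPICS = ["kafkaTempsTopic", "kafkaHighTempTopic", "kafkaAverageTopic"]

def create_topic_cmd(topic):
    # Mirrors the kafka-topics.sh command above, one topic at a time.
    return ["./kafka/bin/kafka-topics.sh", "--create",
            "--zookeeper", "localhost:2181",
            "--replication-factor", "1",
            "--partitions", "1",
            "--topic", topic]

for t in TOPICS:
    print(" ".join(create_topic_cmd(t)))
    # subprocess.run(create_topic_cmd(t), check=True)  # uncomment inside the container
```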
Open a terminal with docker exec -it <container id> bash
To start Spark Streaming:
./spark/bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 ./spark/tempSummary.py
This script will bin received temperatures into one of three categories, LOW, HIGH, or FINE, and display the count of each.
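The binning itself amounts to two threshold comparisons plus a count. A Python sketch of the logic (the actual cutoffs live in tempSummary.py; the threshold values below are assumptions):

```python
from collections import Counter

LOW_THRESHOLD = 60.0    # assumed cutoffs; check tempSummary.py for the real values
HIGH_THRESHOLD = 80.0

def bin_temp(t):
    # Map a temperature reading to one of the three categories.
    if t < LOW_THRESHOLD:
        return "LOW"
    if t > HIGH_THRESHOLD:
        return "HIGH"
    return "FINE"

batch = [55.2, 65.0, 83.1, 70.4, 90.0]
counts = Counter(bin_temp(t) for t in batch)
print(counts)  # Counter({'FINE': 2, 'HIGH': 2, 'LOW': 1})
```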
To access the other two Kafka topics, you can run the following commands:
kafkaHighTempTopic:
./spark/bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 ./spark/highestDisplay.py
kafkaAverageTopic:
./spark/bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 ./spark/averageDisplay.py
These will simply print out data as it arrives. They will not transform the data any further.
The Raspberry Pis live on their own wireless network.
You will need to connect to the lab network to interact with the Pi and to copy data over.
You will need to connect to the normal wireless network the first time you build your jar, since Maven will need to download its dependencies. You may also need to reconnect if Maven decides to download them again.
Connect to your Raspberry Pi. They are named A through G, and can be found at IP addresses 192.168.0.201, 192.168.0.202, ..., 192.168.0.207
The password to the pi is edgecomp
Run these two shell commands immediately:
export ZOOKEEPER_SERVER=192.168.0.101:2181
export BOOTSTRAP_SERVER=192.168.0.101:9092
The application will use these environment variables to locate the central Kafka instance.
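For reference, this is how a script might pick those settings up, with local fallbacks when the variables are unset (the default values below are assumptions, not necessarily what the lab's Java code does):

```python
import os

# Read the addresses exported above, falling back to a local install.
zookeeper = os.environ.get("ZOOKEEPER_SERVER", "localhost:2181")
bootstrap = os.environ.get("BOOTSTRAP_SERVER", "localhost:9092")
print(zookeeper, bootstrap)
```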
The Pis are preloaded with an Edgent jar.
cd to edgent
Execute Edgent with ./run-tempSensorPubApp.sh
Develop locally like before and build a new jar. Do this from inside /lab/edgent in your Docker container.
Then copy the generated target/*-uber.jar to the edge device with:
scp target/*-uber.jar [email protected]:~/edgent/target
where X is the last digit of your Pi's IP address. Then run it on the Pi as before.
Delete all containers: docker rm $(docker ps -a -q)
Delete all images: docker rmi $(docker images -q)
Copy a file to docker with sudo docker cp <local file> <container id>:<container filepath>
You may or may not need to run docker commands as root.