mayacakmak / emarsoftware
Front end tools for designing robot faces, setting up custom robot APIs, and controlling a robot (WoZ)
License: BSD 2-Clause "Simplified" License
On the EUP tool, when clicking "run" before the previous run has finished, the web robot responds erratically (e.g., it jumps to random points in the program and executes random parts).
This may also happen when re-running the program for the first time after a while (e.g., when the web robot has been loaded but is not on the starting screen).
A quick workaround is to refresh the robot programming tool page before re-running the program, though this may not work every time.
Letting text appear incrementally within a single screen (e.g., slowly displaying bullet points, rather than showing the paragraph all at once). Similarly, letting buttons appear on screen after an allotted time.
Likely requires separating text input, perhaps by line breaks (see Define Dimensions of Belly Screen issue).
Would also be nice to have a smooth transition (fade in/out).
When I tried to change the eye position in the WoZ (for Test Robot), the eyeballs shifted to the left corner of the screen and stayed there in both the preview and the rendered robot, no matter how many times I refreshed the pages or tried to change the settings back.
Cause: robot/state/currentEyes gets set to the literal string “currentEyes” in the database, instead of a valid position (up, down, etc.).
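Until the root cause is fixed, a write-time guard would at least keep bad values out of the database. A minimal sketch, where both the list of valid positions and the state-object shape are assumptions, not the actual API:

```javascript
// Hypothetical guard for writing the eye position; the valid-position list
// and the robotState shape are guesses, not the real data model.
const EYE_POSITIONS = ["up", "down", "left", "right", "center"];

function setCurrentEyes(robotState, position) {
  if (!EYE_POSITIONS.includes(position)) {
    // Reject bad writes, e.g. the literal string "currentEyes".
    throw new Error(`Invalid eye position: ${position}`);
  }
  robotState.currentEyes = position;
  return robotState;
}
```

A guard like this would turn the silent corruption into a visible error at the point where the bad write happens.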
When zooming in and out of the screen, the aspect ratio of the robot changes, which breaks the careful design behind it. The same happens when running the robot on a phone or computer screen. Locking the aspect ratio would keep the design intact across screens.
In addition to images that can be uploaded manually, we can have a list of supported icons. I tried Font Awesome and it works well; it was very simple to add.
After typing text into a belly screen, part of it is sometimes deleted or not all of the text is saved.
Teens reported liking the same color for the Face and Belly screens. Currently it is not possible to change the color of the belly screen, which we would like to be able to do.
I always need to save a robot program to “my programs” even if I just want to run a robot. We want to skip this step and be able to run a robot without the program needing to be added to the programs first (or by finding a way for programs to be added automatically when we want to run them).
It would be useful to be able to browse all available programs for a robot and trigger them directly from the robot back-end. We could have a small menu that opens on the back-end belly renderer, lists all the programs with an icon for each, and allows starting a program. The back-end would still communicate with the database to control itself, but it should be doable. This would be useful for running interaction studies and demos: just turn on the robot and start an interaction without having to open something on a different device.
The letters on the iPad appear very small compared to their size in the belly editor. The letters on the iPad should appear larger.
Add functionality of voice recording as an input modality. This can be added in the belly screen editing.
Conventions
This convention should be added in the description of each program
Old robots: v1, v2, etc
New robots: [month/day/year] - [name of the robot program] - [programmer name, first and last] . E.g., "02/04/21 - Notice Five Things - Patricia Oliveira"
While creating a new screen for the ACT Yes And No micro-interaction, any text including and after an apostrophe in the belly screen was not accepted and disappeared after typing. Most other common special characters work, however.
Currently, when a program runs, there is no info in the robot programming tool about what is happening. It would be useful to wrap console.log statements in something like displayDebugInfo, a function that both adds the info to an interface debug window and still prints it to the console.
I can’t get the robot to speak the value of a variable (even after converting it to a string value).
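A sketch of the displayDebugInfo helper described above; the #debug-window element id is an assumption, not an existing part of the tool:

```javascript
// Hypothetical displayDebugInfo helper: print to the console and, when
// running in a browser, mirror the message into an on-page debug window.
// The "debug-window" element id is an assumption.
function displayDebugInfo(message) {
  console.log(message); // always keep the normal console output
  if (typeof document !== "undefined") {
    const debugWindow = document.getElementById("debug-window");
    if (debugWindow) {
      const line = document.createElement("div");
      line.textContent = String(message);
      debugWindow.appendChild(line);
    }
  }
  return message; // returned for convenience/chaining
}
```

Existing console.log calls could then be replaced with displayDebugInfo one by one without losing the console output.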
When inserting consecutive robot.speak(); calls, only the first one is performed.
The workaround is to add robot.sleep(); calls in between the speak calls and guess the duration, or to aggregate them into a single robot.speak(); call (but that removes the natural pauses between sentences).
Either make this behavior explicit in the specifications or fix it.
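The sleep-between-speaks workaround can be wrapped in a small helper. robot.speak() and robot.sleep() are the API names from this issue; the default pause length is a guess that may need tuning per sentence, and the mock robot exists only so the sketch can be exercised without hardware:

```javascript
// Workaround sketch for "only the first robot.speak() runs": insert a
// guessed robot.sleep() after each sentence. The 800 ms default is an
// assumption, not a documented value.
async function speakWithPauses(robot, sentences, pauseMs = 800) {
  for (const sentence of sentences) {
    await robot.speak(sentence); // one sentence per call
    await robot.sleep(pauseMs);  // guessed gap so the next call isn't dropped
  }
}

// Minimal mock robot that records calls, for trying the helper out.
function makeMockRobot(log) {
  return {
    speak: async (text) => { log.push(`speak:${text}`); },
    sleep: async (ms) => { log.push(`sleep:${ms}`); },
  };
}
```

Aggregating everything into one robot.speak() call avoids the bug too, but this helper keeps the natural pauses between sentences.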
Created by Tanya. Needs revision in the code.
Can we have the option to display animations (e.g., a timer) or images (e.g., a heart) on the Belly Screen? Tanya and I designed the Yes / No micro-intervention considering this possibility, so it would be great if we have it!
Can we develop a head tilt in the face preview? :)
Add a progress bar to each of the activities. The progress bar can be shown at the top or bottom of the belly screen and should show progress in two colors (no need for numbers or percentages). Progress is the number of screens the user has completed out of the total number of screens in the activity.
Progress Bar example: https://growth.design/case-studies/instagram-monetization/
How to incorporate the different micro-interventions (defined as different robots in the program) in the Navigation Panel?
screen.slider is undefined; we need to check that it exists before checking screen.slider.isShown.
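A minimal guard for this, assuming the screen-object shape implied by the issue (an optional slider object with an isShown flag):

```javascript
// Tolerate a missing screen.slider instead of crashing on
// screen.slider.isShown; the object shape is assumed from the issue text.
function sliderIsShown(screen) {
  return Boolean(screen.slider && screen.slider.isShown);
}
```

The same pattern applies anywhere else the renderer reads nested, possibly-missing screen properties.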
Created by Tanya. Needs revision in the code.
When running a program multiple times, the robot doesn’t reset to its default state despite explicit commands to do so (if the robot was set to face 3 by the end of a run, then even if the first command of the program sets it to face 1, it doesn’t change).
Currently, there is no option to upload new library sounds. Would this be possible? E.g., to add the sound library from EMOTE project that was already tested.
Add a connection between the touch sensor, actuators (motors), and belly editing.
The robot does not work in a Safari browser.
While attempting to edit the belly screens for the test robot, the frontend link reset itself or refreshed and instead showed me an empty robot. When I went back to the main screen, it did not show me the list of robots and only displayed 'undefined'.
Patricia faced the same issue when she opened the robot setup links at her end.
This feature has become necessary for the ACT/DBT study (see Issue #33). How we might be able to do it:
First we need a new function in the robot API, say runProgram(programName) or runProgram(programID); the API can list all available programs for a given robot (just like it lists the faces and belly screens).
Currently a program is run as below. The code is not actually parsed, but some function calls can easily be modified (e.g. robot.sleep). Suppose the program itself had lines calling robot.runProgram, and that function found the named program in the database, turned it into codeText, and used eval() to run it, as below. Would that just work? We should try this as a first idea.
async function runProgram(robotId, programId) {
  // Look up the program's source text in the database cache.
  let codeText = robotPrograms[robotId][programId].program;
  // Make robot.sleep awaitable (note the escaped dot in the regex;
  // an unescaped dot would match any character).
  codeText = codeText.replace(/robot\.sleep/g, "await robot.sleep");
  // Run the program text inside an async wrapper.
  eval("(async () => {" + codeText + "})();");
}
In the Belly Editor menu, the belly screens are not in the order in which they should appear, so we always need a function to call the belly screens in a given order. What we want is for the order of the belly screens in the Belly Editor menu to reflect the order in which they appear in the interaction (to avoid having to call them explicitly).
Search for bugs before study deployment.
The belly rendering has gotten quite complex and messy; it needs some refactoring and rethinking.
First, I propose to:
Once we separate the belly editor we can start adding other belly rendering features mentioned in other issues (text input, grouping/layout, images, icons, custom HTML, etc) both to the backend belly renderer and the new front-end belly editing tool. We also need to:
Unable to input text on face preview.
Right now we have two sliders to control the motors, which is confusing. Replace the two sliders with a single slider and add a checkbox to switch between the different control modes.
We should add another function to the api called setScreenByName or something that will set the current screen to a screen based on name rather than index, so that the order of belly screens can be changed without affecting the programs.
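One possible shape for this, assuming belly screens are stored in an array of objects with a name field (the actual data model may differ):

```javascript
// Hypothetical setScreenByName building block: resolve a screen name to its
// index so the existing index-based call can be reused. The array-of-objects
// shape with a `name` field is an assumption.
function screenIndexByName(bellyScreens, name) {
  const index = bellyScreens.findIndex((screen) => screen.name === name);
  if (index === -1) {
    throw new Error(`No belly screen named "${name}"`);
  }
  return index;
}
```

setScreenByName would then call the existing index-based setter with the resolved index, so reordering screens in the editor would no longer break programs.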
Currently any user can edit the content of a robot. This means someone could spend a lot of time making a robot work, and someone else could come along and unintentionally mess it all up. To avoid this, I propose:
Note that non-admins should still be able to view a robot ("read") and control it through WoZ or preexisting programs (i.e. "write" onto robot state/actions, but not content like faces, screens, programs, etc).
Also:
Can we build the option to input text on the Belly Screen? This will allow teens to input some thoughts (e.g., essential for the Yes / No micro-intervention).
Have background music play for the duration of the exercise, or for select slides (based on Patrícia's suggestion).
Some ideas for implementing this:
One idea that came up in meetings is to have a version of the virtual robot that renders well on a smart phone. A first attempt at this has been implemented and is part of the back-end tools now:
https://mayacakmak.github.io/emarsoftware/robotbackend/index.html
This should work okay, but it would be good to improve in a few ways:
robotbackend/emar.png
Created by Tanya. Needs revision in the code.
Currently we have three different groupings/arrangements of robot content created by developers:
I propose that we make everything consistent with the model of the programs. To that end we need to: