
emarsoftware's People

Contributors

kaimihata, mayacakmak, noramorsi, patricialvesoliveira, samesoul, tanya17365, thejollyreaper, tonyli1


emarsoftware's Issues

Rerunning program on EUP Tool

On the EUP tool, when clicking "run" before it has run through the program entirely, the web-robot responds erratically (e.g., jumps at random points in the program, executing random parts, etc).
Also might be the case when re-running the program for the first time after a while (e.g., when loading the web robot and it's not on the starting screen).

Quick solution is to refresh the robot programming tool page before re-running the program. Not sure if this works all the time.

Robot API: Showing text/buttons incrementally

Letting text appear incrementally within a single screen (e.g., slowly displaying bullet points, rather than showing the whole paragraph at once). Similarly, letting buttons appear on screen after an allotted time.

Likely requires separating text input, perhaps by line breaks (see Define Dimensions of Belly Screen issue).
Would also be nice to have a smooth transition (fade in/out).
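A minimal sketch of what the incremental reveal could look like, assuming the text input is split on line breaks as suggested above; `revealLines` and the `showLine` callback are hypothetical names, and a CSS fade-in class on the revealed element could provide the smooth transition:

```javascript
// Reveal one line of belly text at a time, at a fixed interval.
// showLine is a hypothetical callback that appends one line (or one
// button) to the belly screen; intervalMs is the delay between reveals.
function revealLines(text, showLine, intervalMs = 1500) {
  const lines = text.split("\n");
  lines.forEach((line, i) => {
    // Schedule each line to appear after i * intervalMs milliseconds.
    setTimeout(() => showLine(line), i * intervalMs);
  });
  return lines.length;
}
```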

Eye Position In Test Robot Fixed to Top Left Corner

When I tried to change the eye position in the WoZ (for Test Robot), the eyeballs shifted to the left corner of the screen and stayed there in both the preview and the rendered robot no matter how many times I refreshed the pages or attempted to change the settings back.
Cause: robot/state/currentEyes gets set to the literal string “currentEyes” instead of the intended position (up, down, etc.) in the database.
Eyes Rendering Issue When Eye Position Changed

Lock Aspect Ratio

When zooming in and out of the screen, the aspect ratio of the robot changes, which breaks the careful design behind it. The same happens when running the robot on a phone or computer screen. Locking the aspect ratio keeps the design intact across screens.
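One way to lock the ratio is with the CSS `aspect-ratio` property, assuming the robot face/belly are rendered inside a single container element; the `.robot-container` selector and the 3/4 ratio here are placeholders, not the project's actual values:

```css
/* Keep the robot's proportions fixed while scaling to fit the screen.
   Selector and ratio are assumed placeholders. */
.robot-container {
  aspect-ratio: 3 / 4;      /* design ratio of the robot layout */
  width: min(100vw, 75vh);  /* fit whichever screen dimension is tighter */
  margin: 0 auto;           /* center horizontally */
}
```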

Color of Face and Belly

Teens reported liking the same color for the Face and Belly screens. Currently, it is not possible to change the color of the belly screen, which we would like to support.

Skip "Save to My programs" Step

I always need to save the robot program to “my programs” even if I just want to run a robot. We want to skip this step and be able to run a robot without it needing to be added to the programs (or by finding a way for robots to be added to programs automatically when we want to run them).

Running existing programs from backend

It would be useful to be able to browse all available programs for a robot and trigger them directly from the robot back-end. We could have a small menu that opens on the back-end belly renderer that lists all the programs, shows an icon for each, and allows starting the program. The back-end would still communicate with the database to control itself, but it should be doable. This would be useful for running interaction studies and demos: just turn on the robot and start an interaction without having to open something on a different device.

iPad Letter Size

The letters on the iPad appear very small compared to their size in the belly editor. The letters on the iPad should appear larger.

Voice Recording Input

Add functionality of voice recording as an input modality. This can be added in the belly screen editing.

Cleaning Robots

  • Check which robots are working fine
  • Delete robots that aren't working
  • Cluster robots with the same programming

Conventions
This convention should be added to the description of each program.

  • Old robots: v1, v2, etc

  • New robots: [month/day/year] - [name of the robot program] - [programmer name, first and last] . E.g., "02/04/21 - Notice Five Things - Patricia Oliveira"

Apostrophe Not Working In Belly Input

While creating a new screen for the ACT Yes And No micro-interaction, any text including and after the apostrophe in the belly screen was not accepted and disappeared after typing it. Most other common special characters work, however.

Robot programming: Execution/debug feedback

Currently, when a program runs, there is no information on the robot programming tool about what is happening. It would be useful to:

  • Make the program read-only while program executes, go back to editable when execution is done.
  • Highlight (e.g. green) the line of the code that is currently executing
  • Have a debug message window that displays what is being printed to the console during execution. For this we can go through the API implementation and replace console.log statements with something like displayDebugInfo; that function can both add the info to the interface debug window and still print it to the console.
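The displayDebugInfo idea from the last bullet could look roughly like this; the "debug-window" element id is an assumption, not an existing id in the tool:

```javascript
// Drop-in replacement for console.log: mirrors the message into a debug
// window in the programming tool interface, then still logs it.
function displayDebugInfo(...args) {
  const message = args.map(String).join(" ");
  // Only touch the DOM when running in a page (assumed element id).
  if (typeof document !== "undefined") {
    const debugWindow = document.getElementById("debug-window");
    if (debugWindow) {
      const line = document.createElement("div");
      line.textContent = message;
      debugWindow.appendChild(line);
    }
  }
  console.log(message); // keep the original console output
}
```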

Consecutive robot.speak() calls

When inserting consecutive "robot.speak();" calls, only the first one is performed.
Need to add robot.sleep(); calls between the speaking calls and guess the duration, or aggregate them into a single robot.speak(); call (but that removes the natural pauses between sentences).

Either make this behavior explicit in the specifications or fix it.
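A minimal sketch of the workaround, assuming robot.speak(text) and robot.sleep(seconds) as in the robot API; the speakSequence helper and the words-per-second estimate are hypothetical, not part of the existing API:

```javascript
// Speak each sentence in turn, sleeping in between so that one speak
// call does not cut off the previous one. The duration is guessed from
// the word count (wordsPerSecond is an assumed speech rate).
async function speakSequence(robot, sentences, wordsPerSecond = 2.5) {
  for (const sentence of sentences) {
    robot.speak(sentence);
    const words = sentence.trim().split(/\s+/).length;
    // Estimated utterance time plus a short natural pause.
    await robot.sleep(words / wordsPerSecond + 0.5);
  }
}
```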

Robot API: Animations / Images on Belly Screen

Can we have the option to display animations (e.g., a timer) or images (e.g., a heart) on the Belly Screen? Tanya and I designed the Yes / No micro-intervention considering this possibility, so it would be great if we have it!

Head Tilt

Can we develop a head tilt in the face preview? :)

Progress Bar

Add a progress bar to each of the activities. The progress bar can be shown at the top or bottom of the belly screen and should show progress in two colors (no need for numbers or percentages). The progress is the number of screens the user has completed out of the total number of screens in the activity.

Progress Bar example: https://growth.design/case-studies/instagram-monetization/
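A minimal sketch of the progress computation, assuming each activity knows its current screen index and total screen count; the "progress-fill" element id is hypothetical, and the two colors would come from the bar's background and the fill color, with no numbers shown:

```javascript
// Compute the completed fraction and resize the colored fill div.
function updateProgressBar(currentScreen, totalScreens) {
  const fraction = Math.min(currentScreen / totalScreens, 1);
  // Only touch the DOM when running in a page (assumed element id).
  if (typeof document !== "undefined") {
    const fill = document.getElementById("progress-fill");
    if (fill) fill.style.width = (fraction * 100) + "%";
  }
  return fraction;
}
```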

Navigation panel program

How to incorporate the different micro-interventions (defined as different robots in the program) in the Navigation Panel?

Belly Screen Dimensions for Drag and Drop

  • Reduce the size of the belly screens that can be dragged and dropped
  • Expand the window on the right side of the "Belly Screens" to match the "Global Settings" section

Robot setup: Upload New Sounds

Currently, there is no option to upload new library sounds. Would this be possible? E.g., to add the sound library from EMOTE project that was already tested.

Undefined Robot

While attempting to edit the belly screens for the test robot, the frontend link reset itself or refreshed and instead showed me an empty robot. When I went back to the main screen, it did not show me the list of robots and only displayed 'undefined'.
Patricia faced the same issue when she opened the robot setup links at her end.

Robot API: Running a program from within a program

This feature has become necessary for the ACT/DBT study (see Issue #33). How we might be able to do it:

  • First we need a new function in the robot API, say runProgram(programName) or runProgram(programID); the API can list all available programs for a given robot (just like it lists the faces and belly screens).

  • Currently a program is run as shown below. The code is not actually parsed, but some function calls can be easily modified (e.g., robot.sleep). If the program itself had some lines calling robot.runProgram, and that function found the program in the database, turned it into codeText, and used eval() to run it, like below, would that just work? We should try this as a first idea.

async function runProgram(robotId, programId) {
  // Look up the stored program text (robotPrograms is assumed to mirror
  // the robot->programs data from the database).
  let codeText = robotPrograms[robotId][programId].program;
  // Make robot.sleep calls awaitable (note the escaped dots in the regex).
  codeText = codeText.replace(/robot\.sleep/g, "await robot.sleep");
  // Run the program text inside an async wrapper.
  eval("(async () => {" + codeText + "})();");
}

Belly Screens Editor

In the Belly Editor menu, the belly screens are not in the order in which they should appear. We always need to use a function to call the belly screens in a given order. What we want is for the order of the belly screens in the belly editor menu to reflect the order in which they appear in the interaction (to avoid having to call them explicitly).

Robot frontend: Belly editor

The belly rendering has gotten quite complex and messy; it needs some refactoring and rethinking.

First, I propose to:

  • Make the belly editor a separate tool from the robot setup, just like the face editor
  • Currently all belly screens are editable at once; make it so only one is edited at a time, and give it more space
  • In the screens created for ACT/DBT I noticed belly screen text formatting using HTML code in the text (I believe by Samuel). That's a nice idea but doesn't allow for visualizing the result. Instead, formatting options should be separated into simple menus, or some sort of separate CSS editor.
  • One thing we should still figure out though is if belly screens are associated with users (like faces) or robots (like it is currently). This might be a separate issue to address both for belly screens and faces.

Once we separate the belly editor we can start adding other belly rendering features mentioned in other issues (text input, grouping/layout, images, icons, custom HTML, etc) both to the backend belly renderer and the new front-end belly editing tool. We also need to:

  • Update the Robot setup tool to access the list of available belly screens, add/remove them from robot, rename them, etc.
  • Add belly editing tool to front end tools menu

Motor Control Sliders

Right now, we have two sliders to control the motors, which is confusing. Replace the two sliders with a single slider and add a checkbox to alternate between the different control modes.

Admin: Read/write permissions for robots

Currently any user can edit content of a robot. This means someone could spend a lot of time making a robot work and someone else can come and mess it all up unintentionally. To avoid this, I propose:

  • Add a list of admins to each robot in the database (robot->admins); these could be email addresses, and we could require Google sign-in. They could also be anonymous uids (though those can change even using the same browser on the same machine). If we do this, maybe move the 'sign in with Google' button somewhere else on the front-end tools page.
  • Change the setup tool to only work for users who are admins on a robot.
  • Only admins of a robot should be able to add/remove programs in the programming tool (though I think currently there is no way of removing programs)

Note that non-admins should still be able to view a robot ("read") and control it through WoZ or preexisting programs (i.e. "write" onto robot state/actions, but not content like faces, screens, programs, etc).

Also:

  • Update the Admin tool in the front-end so the master admins (which can only be added through the database UI) who have access to the Admin tool can add/remove robot admins.
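A client-side sketch of the read/write split described above (actual enforcement would still need database security rules); the robotData shape and both function names are assumptions for illustration:

```javascript
// Content editing (faces, screens, programs) is restricted to admins.
// robot->admins is assumed to be a list of admin email addresses.
function canEditRobotContent(robotData, userEmail) {
  const admins = robotData.admins || [];
  return admins.includes(userEmail);
}

// Everyone may still read a robot and write to its state/actions
// (WoZ control, running preexisting programs).
function canControlRobot() {
  return true;
}
```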

Text Input on Belly Screen

Can we build the option to input text on the Belly Screen? This will allow teens to input some thoughts (e.g., essential for the Yes / No micro-intervention).

Robot API: Background music

Having background music play for the duration of the exercise, or for select slides (based on Patrícia's suggestion).

Some ideas for implementing this:

  • startMusic(); and endMusic(); function on the robot programming tool
  • "Select music" option on Robot Setup tool
  • Volume/mute or play/pause button(s) for users
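A browser sketch of the proposed startMusic()/endMusic() functions using the standard Audio element; the function names follow the first bullet above, and everything else (looping, default volume, where the URL comes from) is an assumption:

```javascript
// Currently playing background track, if any.
let backgroundAudio = null;

// Start looping background music; the URL would come from the
// "Select music" option on the Robot Setup tool.
function startMusic(url, volume = 0.3) {
  endMusic(); // stop any track already playing
  backgroundAudio = new Audio(url);
  backgroundAudio.loop = true;     // keep playing for the whole exercise
  backgroundAudio.volume = volume; // quiet enough for speech on top
  backgroundAudio.play();
  return backgroundAudio;
}

// Stop and release the background track.
function endMusic() {
  if (backgroundAudio) {
    backgroundAudio.pause();
    backgroundAudio = null;
  }
}
```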

Robot backend: Phone robot

One idea that came up in meetings is to have a version of the virtual robot that renders well on a smart phone. A first attempt at this has been implemented and is part of the back-end tools now:
https://mayacakmak.github.io/emarsoftware/robotbackend/index.html

This should work okay, but it would be good to improve in a few ways:

  • Need to test on different phones, the rendered face/belly screens can get misaligned with the background image, find a better/general way to maintain alignment
  • The belly rendering (belly.js) can be more phone compatible. I made the belly div scrollable so we can make buttons etc larger and let people scroll.
  • Need a higher resolution image of the rounded robot. Replace robotbackend/emar.png
  • We could have alternative background images of different bodies in which EMAR face/belly are rendered

Consistent grouping of robot content

Currently we have three different groupings/arrangements of robot content created by developers:

  • Faces: Stored in two places: user->faces and robot->faces. The editor is robot-independent; it gives access to all faces from all users (read-only) and to user->faces (editable).
  • Belly screens: Stored only in robot->screens; all screens are editable by all users of a robot.
  • Programs: Similar to faces, stored in two places: user->programs and robot->programs. But, differently, the editor gives access to all programs on all robots (not all programs from all users).

I propose that we make everything consistent with the model of the programs. To that end we need to:

  • Change the face editor to display faces from robots, perhaps similar to the programming interface where the user can browse robots from within the tool (rather than select which robot before opening the tool).
  • Make a belly editor with a similar model: start copying belly screens under user->screens and make them editable only when they are there.
