
everest-demo's Introduction

OpenSSF Best Practices


The primary goal of EVerest is to develop and maintain an open source software stack for EV charging infrastructure. EVerest is developed with modularity and customizability in mind: it consists of a framework that configures several interchangeable modules, which are coupled to each other via MQTT. EVerest will help speed the adoption of e-mobility by bringing the advantages of open source to the EV charging world. It will also enable new features for local energy management, PV integration, and many more.
The EVerest project was initiated by PIONIX GmbH, to help with the electrification of the mobility sector.

Complete documentation can be found here.

Build & Install

Community

Welcome to the EVerest community 👋. See COMMUNITY.md for how to get in contact with us.

Contributing

Anyone can contribute to the EVerest project - learn more at CONTRIBUTING.md. All project-management-related documents, including our roadmap, can be found here.

Governance

EVerest is a project hosted by the LF Energy Foundation. This project's technical charter is located in CHARTER.md, and the project has established its own processes for managing day-to-day work, documented in GOVERNANCE.md.

Reporting Issues

To report a problem, you can open an issue in this repository. If the issue is sensitive in nature or security-related, please do not report it in the issue tracker; instead, email [email protected].

Licensing

EVerest and its subprojects are licensed under the Apache License, Version 2.0. See LICENSE for the full license text.


everest-demo's Issues

Manager crashes due to dependency issue on Apple M1 machines

The Issue

When attempting to run any of the demos on an Apple M1 machine, they fail due to a missing manifest within the Docker dependencies. Running the MaEVe-based demos results in the following error...

Full MaEVe Failure
~/Documents/everest-demo user$ curl https://raw.githubusercontent.com/everest/everest-demo/main/demo-ac.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1111  100  1111    0     0   2053      0 --:--:-- --:--:-- --:--:--  2057
[+] Running 1/3
 ✘ manager Error                                                           2.3s 
 ⠸ node-red Pulling                                                        2.3s 
 ⠸ mqtt-server Pulling                                                     2.3s 
no matching manifest for linux/arm64/v8 in the manifest list entries
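This error means the image's manifest list has no linux/arm64 entry. One way to check which dependency is missing it is `docker manifest inspect`; the sketch below runs the same grep against a minimal sample manifest so it is self-contained (the JSON is a stand-in, not real output from the demo images):

```shell
# Check a manifest list for a linux/arm64 entry. The JSON here is a minimal
# stand-in for real `docker manifest inspect <image>` output.
manifest='{"manifests":[
  {"platform":{"architecture":"amd64","os":"linux"}},
  {"platform":{"architecture":"arm64","os":"linux","variant":"v8"}}]}'
if printf '%s' "$manifest" | grep -q '"architecture":"arm64"'; then
  arm64_supported=yes
else
  arm64_supported=no
fi
echo "arm64 manifest present: $arm64_supported"
# Against a real image, e.g.:
#   docker manifest inspect ghcr.io/everest/everest-demo/manager:0.0.16
```

Running the real command against each image referenced by the compose files should pinpoint which dependency lacks the arm64 build.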

And likewise, running CitrineOS results in the following...

(screenshot: CitrineOS failure output)

This was originally reported in an internal issue. For those without access to the internal repository, below is a copy of the findings:

Findings

Issues with Apple Silicon

I've run into some issues running on Apple's M1 Chips. Below are the specs:

  • VM: UTM (Running on MacOS 14.4.1), No VM (tested on MacOS hardware)
  • Operating Systems Tested: Ubuntu 23.10, MacOS 14.4.1
  • CPU: Apple M1 Pro
  • RAM: 4 GB
  • Docker Version: Docker 26.1.2 (not Docker Desktop)

And, the subsequent error when attempting to spin-up any of the demos:

no matching manifest for linux/arm64/v8 in the manifest entries

From what I understand, this is an issue with one of our docker dependencies. When attempting a hack-y fix described here and composing locally, we get a bit further -- the script fails with the following:

Attaching to manager-1, mqtt-server-1, node-red-1
mqtt-server-1    | exec /docker-entrypoint.sh: exec format error
mqtt-server-1 exited with code 1
manager-1         | exec /bin/sh: exec format error
node-red-1        | exec /usr/local/bin/npm: exec format error
manager-1 exited with code 1
node-red-1 exited with code 1

This is what makes me believe the issue is with our dependencies, not just the platform declaration (as described in the linked thread). Many of the posts I've read have suggested this is an issue with MySQL (link), which doesn't seem relevant. These comments do have a common thread, however, suggesting that one of these dependencies is missing the linux/arm64/v8 manifest.
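As a hedged stopgap until all images publish arm64 manifests, the amd64 images can be pinned explicitly so Docker runs them under emulation (service names taken from the logs above; the override-file mechanism is standard docker compose, though note the emulation performance caveat mentioned below):

```shell
# Write a compose override that pins each service to linux/amd64, forcing
# emulation on Apple Silicon instead of a manifest lookup failure.
override=$(mktemp)
cat > "$override" <<'EOF'
services:
  manager:
    platform: linux/amd64
  mqtt-server:
    platform: linux/amd64
  node-red:
    platform: linux/amd64
EOF
platform_count=$(grep -c 'platform: linux/amd64' "$override")
echo "services pinned to amd64: $platform_count"
# Then: docker compose -f docker-compose.yml -f "$override" up
# (or simply: export DOCKER_DEFAULT_PLATFORM=linux/amd64)
```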

Looking at EVerest's packages (link), I do see linux/amd64 listed within the OS / Arch tab... perhaps the packages in the .yaml need to be linked differently? I'll keep reading up on it, and will add updates on this and the PyTest issue as I find more details!

Likewise, running a virtual machine hosted on an Apple M1 chip results in a similar failure (tested on UTM). Complete emulation of a Linux machine results in a successful launch of the demos, but performance is hindered to the point that this is not a viable workaround.

Potential Solutions

As suggested within the original thread, I believe that this issue stems from one of the packages missing an ARM64 dependency. I recall from a discussion with the team that this could be traced back to the internal GHCR for EVerest, though I cannot find a paper trail for those thoughts. Ideally, the fix should be as simple as finding and updating the correct package within either the GHCR or Docker manifests. I will continue to investigate and report back what I find!

MRE using EonTI Certs

Opening an issue to discuss the creation of an MRE that uses EonTI certificates instead of the current self-signed certificates.

Enable Security Profile 3 (TLS with Client Side Certificates)

In PR #22, we created a one-line demo that allowed us to test end-to-end charging with OCPP 2.0.1. However, it only supports Basic Auth.


This issue will track the changes required to change it to support Security Profile 3 (2 in MaEVe since it starts with 0), using a client certificate for authentication.

It will temporarily use a forked version of MaEVe that has hardcoded certificates from an adversarial PKI testing event. Eventually, we will want to have the demo use an open CA and non-proprietary certificates, but make it easy to configure so that testers can easily use proprietary certificates or implementations.
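For local experiments before an open CA is wired in, a throwaway client certificate can be generated with openssl. This is a hypothetical sketch only: a real Security Profile 3 setup needs a proper CA chain, and the subject name and file names here are illustrative, not the demo's actual PKI layout.

```shell
# Generate a throwaway self-signed client key/certificate for local SP3
# experiments (illustrative only; SP3 proper requires a CA-signed chain).
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=test-charging-station" \
  -keyout "$workdir/client.key" -out "$workdir/client.pem" 2>/dev/null
cert_subject=$(openssl x509 -noout -subject -in "$workdir/client.pem")
echo "$cert_subject"
```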

@jhoshiko @sahabulh @CRR-SNL for visibility

Getting caught exception "stoi" when trying to deploy everest-demo to Kubernetes

Hi! 👋

I've tried deploying the everest-demo container images to a Kubernetes cluster (specifically Amazon EKS), and although mqtt-server and node-red deployed successfully, manager kept crashing with the following error:

[2024-06-05 13:53:18.883199] [0x00007f730994bb48] [info]    Main manager process exits because of caught exception:
stoi

I wrote the following script to convert the Docker Compose file to a Helm chart for deployment to Kubernetes:

#!/bin/sh

# This script requires the following tools, all of which can be installed with Homebrew:
# - wget
# - kompose
# - kubectl
# - helm
# - kubectx

export EVEREST_MANAGER_CPUS='2.0'
export EVEREST_MANAGER_MEMORY='1536mb'
export TAG='0.0.16'

# Change the following value to the proper Kubernetes context alias in your configuration
K8S_CONTEXT_ALIAS=k8s-tooling-cluster

mkdir -p ./tmp
rm -rf ./tmp/*
cd ./tmp || exit
wget https://raw.githubusercontent.com/EVerest/everest-demo/main/docker-compose.iso15118-dc.yml
kompose -f docker-compose.iso15118-dc.yml convert -c
cd ..
helm lint ./tmp/docker-compose.iso15118-dc || exit
kubectx ${K8S_CONTEXT_ALIAS} || exit
helm upgrade everest ./tmp/docker-compose.iso15118-dc --cleanup-on-fail --create-namespace --description "EVerest demo" --dry-run=client --install --namespace everest || exit
helm upgrade everest ./tmp/docker-compose.iso15118-dc --cleanup-on-fail --create-namespace --description "EVerest demo" --dry-run=server --install --namespace everest || exit
echo "Helm dry-run on server successful. To actually deploy to the tooling cluster, run the following command:"
echo ""
echo "kubectx ${K8S_CONTEXT_ALIAS} && helm upgrade everest ./tmp/docker-compose.iso15118-dc --cleanup-on-fail --create-namespace --description \"EVerest demo\" --install --namespace everest"
echo ""

I ran this script and then ran the command that it printed at the end. The deployment initially appeared to be successful, until I noticed the pod belonging to the manager deployment constantly restarting and failing, eventually going into CrashLoopBackOff.

A kubectl -n everest logs [pod name] yielded the "stoi" error that I mentioned further above.

When I changed spec.template.spec.containers[0].command in the generated manager-deployment.yaml file as below and then ran the same helm upgrade command again, I managed to get the pod to start successfully, so I could log into it and try some troubleshooting:

    spec:
      containers:
        - command: [ "/bin/sh" ]
          args: [ "-c", "while true; do echo hello; sleep 10;done" ]

When I ran the helm upgrade command again to apply this, the manager pod started successfully, and then I could log into it as follows (note that you need to replace the [manager-pod-name] part, because that changes every time the deployment is updated and a new pod is spun up):

kubectl -n everest exec pod/[manager-pod-name] -it -- /bin/sh

In this console, it was easy to recreate the error:

/ext/source/build/run-scripts/run-sil-dc.sh

I noticed that I could run manager with the --help option just fine (I looked in run-sil-dc.sh to see how I could run it):

LD_LIBRARY_PATH=/ext/source/build/dist/lib:$LD_LIBRARY_PATH PATH=/ext/source/build/dist/bin:$PATH manager --help

...But whenever I would try any of the configurations in /ext/source/config, I would get that weird stoi (string to integer conversion?) exception, regardless of whether or not I included the --check option:

LD_LIBRARY_PATH=/ext/source/build/dist/lib:$LD_LIBRARY_PATH PATH=/ext/source/build/dist/bin:$PATH manager --check --conf /ext/source/config/config-sil-dc.yaml

LD_LIBRARY_PATH=/ext/source/build/dist/lib:$LD_LIBRARY_PATH PATH=/ext/source/build/dist/bin:$PATH manager --check --conf /ext/source/config/config-example.yaml

LD_LIBRARY_PATH=/ext/source/build/dist/lib:$LD_LIBRARY_PATH PATH=/ext/source/build/dist/bin:$PATH manager --conf /ext/source/config/config-example.yaml

I took a look at the manager.cpp source code, but it wasn't very clear where exactly the exception was being thrown, because no hints were given other than the exception message stoi.

It appears to be happening somewhere in the int boot(...) function, before the splash banner is printed with EVLOG_info.
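One hypothesis worth ruling out (an assumption on my part, not something confirmed in this issue): Kubernetes injects service-link environment variables such as `MQTT_SERVER_PORT=tcp://10.0.0.1:1883` into every pod, and an `std::stoi` on such a value would throw exactly this exception. A sketch that flags non-numeric `*_PORT` variables (with one simulated injected value so the check is demonstrable):

```shell
# Flag environment variables ending in _PORT whose value is not a plain integer.
export MQTT_SERVER_PORT='tcp://10.0.0.1:1883'   # simulated Kubernetes-injected value
bad_port_vars=''
for name in $(env | grep '_PORT=' | cut -d= -f1); do
  eval "value=\$${name}"
  case "$value" in
    ''|*[!0-9]*) bad_port_vars="$bad_port_vars $name" ;;
  esac
done
echo "non-numeric *_PORT variables:$bad_port_vars"
# If such variables show up in the manager pod, try unsetting them (or setting
# enableServiceLinks: false in the pod spec) and re-running manager.
```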

Strangely enough, when I run the same Docker container image locally, I can't reproduce this issue:

docker run --rm -it --platform linux/amd64 --entrypoint sh ghcr.io/everest/everest-demo/manager:0.0.16
LD_LIBRARY_PATH=/ext/source/build/dist/lib:$LD_LIBRARY_PATH PATH=/ext/source/build/dist/bin:$PATH manager --check --conf /ext/source/config/config-sil-dc.yaml

The result in that case, on an Apple Silicon MacBook, running the Docker container in x86 emulation mode:

2024-06-05 14:21:36.996736 [INFO] manager          ::   ________      __                _
2024-06-05 14:21:37.001297 [INFO] manager          ::  |  ____\ \    / /               | |
2024-06-05 14:21:37.001332 [INFO] manager          ::  | |__   \ \  / /__ _ __ ___  ___| |_
2024-06-05 14:21:37.001609 [INFO] manager          ::  |  __|   \ \/ / _ \ '__/ _ \/ __| __|
2024-06-05 14:21:37.001629 [INFO] manager          ::  | |____   \  /  __/ | |  __/\__ \ |_
2024-06-05 14:21:37.001643 [INFO] manager          ::  |______|   \/ \___|_|  \___||___/\__|
2024-06-05 14:21:37.001659 [INFO] manager          ::
2024-06-05 14:21:37.001689 [INFO] manager          :: Using MQTT broker localhost:1883
2024-06-05 14:21:37.016064 [ERRO] manager         int main(int, char**) :: Main manager process exits because of caught exception:
Syscall pipe2() failed (Invalid argument), exiting

It's also an error, but at least a different one.

I'm a bit at a loss now.

Could you maybe help me get this deployed to Kubernetes? (So far I've only tried AWS EKS, but I guess I could try this in a local minikube cluster or something too. Let me know if that would help.)

I also noticed that the everest-demo/manager container images are not yet multi-platform, but the test cluster in which I tried to deploy it has nodes running on an x86_64 architecture, so that shouldn't be the problem.

Thank you kindly in advance for helping me get this deployed to our test cluster for evaluation! 🙏

🚩 Enable PnC with OCPP 201

The EVerest team has enabled PnC for OCPP 201
EVerest/everest-core#588

They have also added documentation on how to configure the EvseSecurity module
https://github.com/EVerest/everest-core/blob/main/modules/EvseSecurity/doc.rst
and a simple HOWTO guide
https://github.com/EVerest/EVerest/blob/main/docs/tutorials/how_to_plug_and_charge/index.rst

More details are at:
EVerest/EVerest@03f1039#diff-c18cbe481f4e2f2cc3c9c5992561e64464244edcbcda0c2d08bbb0e0987ef23a

Note that the simple guide uses a simple CSMS, created in the EVerest org, that is based on the MobilityHouse package and the CertificateInstallationResponse from josev.

This issue will track the steps required to enable PnC, first with the toy CSMS and then with a real CSMS such as MaEVe

@jhoshiko @activeshadow @sahabulh for visibility and collaboration

Fix EVerest Testing Environment

When trying to run the automated testing demo, the tests fail to run because pytest is not installed:

mqtt-server-1  | 1710442039: mosquitto version 2.0.10 starting
mqtt-server-1  | 1710442039: Config loaded from /mosquitto/config/mosquitto.conf.
mqtt-server-1  | 1710442039: Opening ipv4 listen socket on port 1883.
mqtt-server-1  | 1710442039: Opening websockets listen socket on port 9001.
mqtt-server-1  | 1710442039: mosquitto version 2.0.10 running
manager-1      | ./run-test.sh: line 3: pytest: not found
manager-1 exited with code 127

However, after installing pytest (specifically pytest-asyncio), the tests still fail because the everest.testing module is missing:

_______________________ ERROR collecting core_tests/startup_tests.py _______________________
ImportError while importing test module '/ext/source/tests/core_tests/startup_tests.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib/python3.10/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
core_tests/startup_tests.py:12: in <module>
    from everest.testing.core_utils.everest_core import EverestCore, Requirement
E   ModuleNotFoundError: No module named 'everest.testing'
=============================== short test summary info ===============================
ERROR core_tests/startup_tests.py
!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!
================================ 1 error in 0.14s ================================

I can see that everestpy and edm_tool are both installed, and I can import everest.
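A quick way to confirm both gaps at once inside the manager container (a sketch; the module names come from the errors above, and the suggested package source is an assumption):

```shell
# Check for the two missing test prerequisites reported above. If either prints
# "missing", it needs installing: pytest/pytest-asyncio via pip, and
# everest.testing via the everest-testing package (its exact source in the
# EVerest repositories is an assumption here).
pytest_status=$(python3 -c 'import pytest' 2>/dev/null && echo ok || echo missing)
everest_testing_status=$(python3 -c 'import everest.testing' 2>/dev/null && echo ok || echo missing)
echo "pytest: $pytest_status"
echo "everest.testing: $everest_testing_status"
```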

Refactor docker images in everest-demo

In the context of the general refactoring of docker images in EVerest,
it would be good to refactor the docker images in the everest-demo, too.

Proposed Changes

  • Use Reusable Workflow/Custom Github Action from everest-ci to deploy everest-demo related docker images
  • Adapt versioning strategy to use the same versioning strategy as everest-ci
    Currently there is a .env file in the everest-demo repository that defines the version of the docker images. In everest-ci the docker image versions are synced to the version of the everest-ci repository.
    We should unify the versioning strategy across the EVerest repositories.
  • The steve docker image is very similar to the one in everest-dev-environment (Currently only visible in feature branch, moved from everest-utils). It would be good to merge these two images into one.
  • Reuse the ghcr.io/everest/everest-dev-environment/mosquitto image in everest-demo instead of building it again.
  • Use only docker images from ghcr.io/everest in everest-demo

Further Thoughts

Versioning Strategy

There are pros and cons to both strategies. The current strategy is more flexible, as it allows releasing a new version of the docker images without releasing a new version of the everest-demo repository. This can be useful if only the docker images are changed, but nothing else in the everest-demo repository. On the other hand, the current strategy can lead to confusion, as the version of the docker images is not directly visible in the everest-demo repository.
Using the strategy from everest-ci means releasing a new version of each docker image whenever a new version of the everest-demo repository is released. This is more consistent, but can also lead to unnecessary version bumps of the docker images.

Steve Docker Image

I believe the two images can be merged easily, since the only difference I could find is the additional README.md in everest-demo. The question is where to put the merged Dockerfile. I would suggest putting it in everest-dev-environment and using it from there in everest-demo. This would also make the image easier to maintain, as it would live in only one place.

🚑 Images being built by the CI/CD are being tagged with the branch instead of the version tag

Describe the bug
@drmrd can you please take a look at this?
From https://github.com/EVerest/everest-demo/actions/runs/7109613037 (for example), we have

#12 pushing manifest for ghcr.io/everest/everest-demo/manager:main@sha256:89799fb3302309c5337ab40c85af7e573d65ff2decda6315c2c1eb644c722681
#12 pushing manifest for ghcr.io/everest/everest-demo/manager:main@sha256:89799fb3302309c5337ab40c85af7e573d65ff2decda6315c2c1eb644c722681 1.5s done
#12 DONE 2.2s

To Reproduce
Check the tags of the published packages

Expected behavior
Packages should be tagged with 0.0.3 or 0.0.4
Instead, packages are tagged with main
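A sketch of the expected behavior: the publish tag should be derived from the version file in the repository rather than the branch ref. The `TAG` variable name matches the demo scripts in this repo; the sample `.env` content below is a stand-in:

```shell
# Derive the publish tag from a .env file (sample content stands in for the
# repository's actual .env).
envfile=$(mktemp)
printf 'TAG=0.0.4\n' > "$envfile"
expected_tag=$(grep '^TAG=' "$envfile" | cut -d= -f2)
echo "images should be tagged ghcr.io/everest/everest-demo/manager:$expected_tag, not :main"
rm -f "$envfile"
```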

Screenshots
(screenshot of the published package tags omitted)

๐Ÿ”Œ ๐Ÿ• Experiment with an end to end demo of departure time

Let's try to build on #44 to include departure time.

EDIT: Simplified scenarios with a single SASchedule. More complex schedules are tracked in #68

High level scenario using AC L2:

  • ISO 15118-2 with EIM
    • plug in the car with no departure time, eamount = 60,000 Wh
    • receive SASchedule with pmax = 22,080W, max current = 32A, duration = 1 day
  • ISO 15118-2 with PnC
    • plug in the car with no departure time, eamount = 60,000 Wh
    • receive SASchedule with pmax = 22,080W, max current = 32A, duration = 1 day
  • Grid control only: grid -> CSMS sends a charging profile to the station with:
    • 10A or 6900W starting an hour before the start of the demo and running for 4 hours
    • plug in the car with no departure time, eamount = 60,000 Wh
    • receive SASchedule with pmax = 6900W, max current = 10A, duration = 1 day
  • Car control only: restart EVSE so that prior charge schedule is deleted
    • plug in the car with departure time of 6 hours (21600 secs), eamount = 60,000 Wh
    • receive SASchedule with pmax = 10,000W, max current = 15A, duration = 6 hours
  • Car + grid control: grid -> CSMS sends a charging profile to the station with:
    • 10A or 6900 W starting an hour before the start of the demo and running for 4 hours
    • plug in the car with departure time of 3 hours (10800 secs), eamount = 60,000 Wh
    • receive SASchedule with pmax = 6900W, max current = 10A, duration = 3 hours
    • note that the car does not receive its requested energy because of the grid control. It only receives 20,700 Wh
  • Real world example: grid -> CSMS sends a charging profile to the station with:
    • 20A or 13,800 W for the next 12 hours (43200 secs, overnight, no sun is shining)
    • EV plugs in with a requested departure time of 16 hours (57600 secs) and requested energy of 85kWh [1]
    • receive SASchedule with pmax = 5312, max current = 7A, duration = 16 hours pmax (float) = 5312.5 (ceil: 5313), max_current = 7.7 (ceil 8), 15.998 hours (ceil 16).

[1] This is a real-life example from our recent trip to Arcata. We got to the hotel at around 8pm with a Tesla that was almost empty. We wanted to walk to a concert the next morning (9am - 11am) and then leave at noon rather than deal with parking at the concert venue. The hotel had validated parking in the public lot, but the charger was not smart, and there was no communication between the network and the hotel - the validated permit was a piece of paper to put on the windshield. So we got a notification at around 6am indicating that we would start getting charged for parking since the charging was complete. In this case, the fast charging was actually bad - my husband had to run down to the lot and move the car right after he woke up.
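The numbers in the real-world scenario can be cross-checked, assuming three-phase AC at 230 V per phase (i.e. 690 W per ampere, which matches the 32 A -> 22,080 W figure above):

```shell
# Cross-check of the final scenario: 85 kWh requested over a 16-hour window.
# Assumes three-phase 230 V AC, so P = 690 * I (consistent with 32 A = 22,080 W).
pmax=$(awk 'BEGIN { printf "%.1f", 85000 / 16 }')                 # watts
max_current=$(awk 'BEGIN { printf "%.1f", (85000 / 16) / 690 }')  # amperes
echo "pmax = $pmax W, max_current = $max_current A"
```

This reproduces the quoted pmax of 5312.5 W (ceil 5313) and max_current of 7.7 A (ceil 8).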

Citrine Demo Not Working After Recent Changes

After the #39 changes were merged into main, the Citrine demo using SP1 no longer works. All the software loads properly and runs, but once we get into the SIL simulation, EVerest is unable to connect to the websocket. I even tried connecting EVerest to Citrine's 8081 port, which is unsecured, yet it is still unable to connect.

To Reproduce

Hardware
MacBook - macOS Ventura: 13.6.7 - Intel Core i7

โ‡๏ธ Smart charging demo at CharIN

In a few short weeks, we are going to try to pull together a demo for CharIN, to be held in Cleveland from June 11-14. Given that we have been working on smart charging recently, the goal is to try and demo basic, end to end smart charging.

So this flow diagram:

(sequence flow diagram omitted)

MQTTAbstractionImpl::connectBroker Failed to open socket: Resource temporarily unavailable

The error in the subject line occurs on Arch Linux during a run of:
curl https://raw.githubusercontent.com/everest/everest-demo/main/demo-ac.sh | bash

Log excerpt:

manager-1      | 2024-03-05 13:41:26.061457 [ERRO] manager         bool Everest::MQTTAbstractionImpl::connectBroker(const char*, const char*) :: Failed to open socket: Resource temporarily unavailable
manager-1      | 2024-03-05 13:41:26.061514 [ERRO] manager         int boot(const boost::program_options::variables_map&) :: Cannot connect to MQTT broker at mqtt-server:1883
node-red-1     | 5 Mar 13:41:26 - [info] Dashboard version 3.6.2 started at /ui
node-red-1     | 5 Mar 13:41:26 - [info] Settings file  : /data/settings.js
node-red-1     | 5 Mar 13:41:26 - [info] Context store  : 'default' [module=memory]
node-red-1     | 5 Mar 13:41:26 - [info] User directory : /data
node-red-1     | 5 Mar 13:41:26 - [warn] Projects disabled : editorTheme.projects.enabled=false
node-red-1     | 5 Mar 13:41:26 - [info] Flows file     : /config/config-sil-flow.json
node-red-1     | 5 Mar 13:41:26 - [warn] 
node-red-1     | 
node-red-1     | ---------------------------------------------------------------------
node-red-1     | Your flow credentials file is encrypted using a system-generated key.
node-red-1     | 
node-red-1     | If the system-generated key is lost for any reason, your credentials
node-red-1     | file will not be recoverable, you will have to delete it and re-enter
node-red-1     | your credentials.
node-red-1     | 
node-red-1     | You should set your own key using the 'credentialSecret' option in
node-red-1     | your settings file. Node-RED will then re-encrypt your credentials
node-red-1     | file using your chosen key the next time you deploy a change.
node-red-1     | ---------------------------------------------------------------------
node-red-1     | 
node-red-1     | 5 Mar 13:41:26 - [warn] Encrypted credentials not found
node-red-1     | 5 Mar 13:41:26 - [info] Server now running at http://127.0.0.1:1880/
node-red-1     | 5 Mar 13:41:26 - [info] Starting flows
node-red-1     | 5 Mar 13:41:26 - [info] Started flows
node-red-1     | 5 Mar 13:41:26 - [info] [mqtt-broker:fc8686af.48d178] Connection failed to broker: mqtt://mqtt-server:1883
manager-1 exited with code 1
node-red-1     | 5 Mar 13:41:41 - [info] [mqtt-broker:fc8686af.48d178] Connection failed to broker: mqtt://mqtt-server:1883

docker version

 Version:           25.0.2
 API version:       1.44
 Go version:        go1.21.6
 Git commit:        29cf629222
 Built:             Thu Feb  1 10:50:44 2024
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          25.0.2
  API version:      1.44 (minimum version 1.24)
  Go version:       go1.21.6
  Git commit:       fce6e0ca9b
  Built:            Thu Feb  1 10:50:44 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.13
  GitCommit:        7c3aca7a610df76212171d200ca3811ff6096eb8.m
 runc:
  Version:          1.1.12
  GitCommit:        
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker-compose --version
Docker Compose version 2.24.6
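"Resource temporarily unavailable" is strerror(EAGAIN), so one hedged avenue (an assumption, not a confirmed diagnosis for this report) is to check whether per-process resource limits on the host or in the container are unusually low:

```shell
# Hedged diagnostic sketch: print the resource limits most relevant to opening
# sockets; exhausted limits are one possible source of EAGAIN.
open_files_limit=$(ulimit -n)
max_user_processes=$(ulimit -u)
echo "open files limit: $open_files_limit"
echo "max user processes: $max_user_processes"
# If these are unusually low, raise them (e.g. via /etc/security/limits.conf or
# `docker run --ulimit nofile=...`) and retry the demo script.
```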

🔌 🕐 🗒️ 📓 More complex departure times and interaction with complex composite schedules

In #64 we experimented with getting an end-to-end demo of departure time to work. We hoped to cover the following scenarios. However, the hookup between the OCPP side and the ISO side is clunky and only appears to support setting a single limit at a time (and thus returning a single SASchedule at a time). So we are scaling down the initial demo and creating this new issue for plumbing through multiple schedule periods and, potentially, multiple SASchedules sent in ISO 15118-2.

High level scenario using AC L2:

  • Grid -> CSMS sends three separate charging profiles to the station
    • 10A or 2.4 kW starting an hour before the start of the demo and running for 4 hours
    • 20A or 4.8 kW starting 5 hours later and running for another 4 hours
    • 32A or 7.680 kW starting 10 hours later and running for another 12 hours
  • EV plugs in with a requested departure time of 7 hours and requested energy of 16kWh (~ 2.4 * 7)
    • EVSE returns a single SA schedule with 10A for 8 hours
  • EV plugs in with a requested departure time of 6 hours and requested energy of 21kWh (~ 2.4 * 3 + 4.8 * 3)
    • EVSE returns two SA schedules (10 A for 3 hours and 15 A for 3 hours)
  • EV plugs in with a requested departure time of 16 hours and requested energy of 85kWh [1]
    • EVSE returns three SA schedules (10 A for 3 hours, 20 A for 4 hours, 28 A for 9 hours)
      • (0.24 * 10) * 3 + (0.24 * 20) * 4 + (0.24 * 28) * 9 = 86.88
      • 3 + 4 + 9 = 16
  • (bonus, if we have time) EV plugs in with a requested departure time of 4 hours and requested energy of 21kWh
    • EVSE returns zero SA schedules (is this what should happen? not super clear from the spec)

[1] This is a real-life example from our recent trip to Arcata. We got to the hotel at around 8pm with a Tesla that was almost empty. We wanted to walk to a concert the next morning (9am - 11am) and then leave at noon rather than deal with parking at the concert venue. The hotel had validated parking in the public lot, but the charger was not smart, and there was no communication between the network and the hotel - the validated permit was a piece of paper to put on the windshield. So we got a notification at around 6am indicating that we would start getting charged for parking since the charging was complete. In this case, the fast charging was actually bad - my husband had to run down to the lot and move the car right after he woke up.
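The 85 kWh scenario's bullet arithmetic can be cross-checked (using the 0.24 kW per ampere figure the bullets themselves use, i.e. 240 V):

```shell
# Cross-check: three SA schedules at 10 A / 3 h, 20 A / 4 h, 28 A / 9 h,
# at 0.24 kW per ampere, should deliver roughly the requested 85 kWh.
energy_kwh=$(awk 'BEGIN { printf "%.2f", (0.24*10)*3 + (0.24*20)*4 + (0.24*28)*9 }')
total_hours=$((3 + 4 + 9))
echo "delivered energy = $energy_kwh kWh over $total_hours h"
```

This matches the 86.88 kWh and 16 h stated in the bullets.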

Patch Cleanup After CharIN Demos

During CharIN, multiple on-the-spot patches had to be made to get the demos working, found in this PR. The goal is now to slowly fix everything so that these patches are no longer needed.

Discussions started about debugging our MaEVe implementation of setChargingProfile so that it can work without a patch, found here. This thread is now the continuation of all of the above.

Software In Loop Simulation Script Not Found

After the newest changes to everest-demo/main last night (found here), running the demo-iso15118-2-ac-plus-ocpp.sh command now returns this error:

Starting software in the loop simulation

sh: can't open '/workspace/build/run-scripts/run-sil-ocpp201-pnc.sh': No such file or directory

Build and run custom version of EVerest on the uMWC

The EVerest project has open hardware as well (https://everest.github.io/nightly/hardware/pionix_belay_box.html), which is available as a kit from Pionix. Pionix also sells the uMWC (https://shop.pionix.com/products/umwc-micro-mega-watt-charger). This is a non-open device in a housing that shares some hardware with the Belay Box, although it has a different power module that is limited to 1W output.

In this issue, we will track the steps to run a custom build of EVerest on the uMWC so that we can perform HIL testing.

@faizanmir21 The instructions are here:
https://everest.github.io/nightly/hardware/pionix_belay_box.html#developing-with-everest-and-belaybox

They are for the Belay Box, but I'm hoping that they will apply to the uMWC as well. If not, we can ask the community for help.

My suggested steps are:

  1. Check everest-dev.service and verify that it starts the dev build from /mnt/user_data/opt/everest
  2. Install a dev build from the latest stable release (2024.3.0) https://github.com/EVerest/everest-core/releases/tag/2024.3.0
    • We already have everest builds in the docker containers. So you can run the manager docker container and use it as the base to cross-compile. To get a shell, you can use:
    • one of the demo scripts and then docker exec -it ....manager /bin/bash OR
    • docker run -it ghcr.io/everest/everest-demo/manager:0.0.16 /bin/bash
  3. Then rsync it over and try to boot up!

@drmrd @couryrr-afs @wjmp for visibility
