enhancements's People

Contributors

alaypatel07, aufi, djwhatle, djzager, eemcmullan, eriknelson, fabianvf, fossabot, jaydipgabani, jmle, jmontleon, jortel, jwmatthews, mansam, mguetta1, mundra-ankur, pranavgaikwad, rromannissen, savitharaghunathan, shawn-hurley, shubham-pampattiwar, sjd78, sseago

enhancements's Issues

[RFE] Improve Assessment capabilities to support intelligent workflows and customizations

This RFE is tracking work to expand the assessment capabilities in Konveyor to support at a minimum:

  • Customizations - allow end users to customize assessment questions and answers as they choose
  • Intelligence - provide a richer way of controlling workflows inside an assessment so that future questions may be presented based on information from an analysis and/or answers to prior questions

This RFE is not complete; we will come back to update and refine it as the fuller scope of needs is identified.

[RFE] Custom Migration Targets

Through interaction with different focus user groups in the field, it has been discovered that configuring custom rules on each analysis run can be cumbersome and doesn't scale well when dealing with third party factories, as it requires them to understand some concepts from the Konveyor domain that could be abstracted away.

It is desired to provide an abstraction layer for these users, hiding away the complexities of custom rules configuration by exposing them as custom migration targets managed by advanced users.

[RFE] Allow architects to review an application without an assessment

Overall feedback for the Assessment module in Konveyor is that it is too opinionated and not flexible enough to adapt to the landscape of all organizations. Given that, some users have their own approach to assessment using their own questionnaires and sessions, but would like to be able to reflect their decisions about the most suitable migration strategy for each application without having to fill in the built-in assessment questionnaire.

Right now Tackle forces users to fill in the assessment questionnaire in order to be able to review an application. The request is that, as architects, users should be able to review applications and determine their strategy, effort and criticality without having to run an assessment first.

[RFE] Repository crawler for application import

What is the problem?

The CSV import requires some information gathering to be done in advance by the organization embarking on the modernization initiative.

Why is this a problem?

Sometimes the stakeholders promoting the migration initiative don't have the horizontal view of the application portfolio that a holistic approach would require. This leads to the migration team having to figure out a way to retrieve information about the application portfolio that usually revolves around the source code repositories. Having to put together scripting to deal with this can be expensive and slow down the initial stages of the migration initiative.

Proposed solution

Enable application creation in the inventory at scale by automating the exploration of source code repositories to retrieve information about applications. The automation should be able to:

  • Create applications based on flexible mappings between application attributes and the topology of the repository, including any grouping abstraction that the enterprise git repository management platform might introduce (groups, projects, organizations...). For example, it should be up to the user to decide whether the Organization that a certain repository belongs to in GitHub should be applied as a tag or as the Business Service for the resulting imported application in the inventory (a configuration sketch follows this list).
  • Associate stakeholders to the applications based on the user information that the enterprise git repository management platform contains. Again, this should be flexible and the user should be able to decide at what level that information is retrieved depending on the source platform.
  • Ensure compatibility with the most widespread enterprise git repository management platforms: GitHub, GitLab and Bitbucket. This should be done through an abstraction layer with mapper implementations for each of these platforms, so that the data model and the post-import user experience are decoupled from the peculiarities of each platform. The way information is stored and grouped varies greatly between them, as summarized in this article, so each implementation will likely be expensive. This means an analysis should be made to implement the mapper for the most widely used platform first, and keep iterating from there.
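As a purely illustrative sketch (all field names are hypothetical, not an agreed design), a crawler configuration along these lines could express such mappings:

crawler:
  platform: github                      # one mapper implementation per supported platform
  credentials: my-github-token          # reference to an existing credential in Konveyor
  scope:
    organizations: [payments, web-portals]
  mappings:
    businessService: organization       # apply the GitHub Organization as the Business Service
    tags:
      - from: repository.topics         # apply repository topics as application tags
    stakeholders: repository.admins     # associate repository admins as stakeholders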

[RFE] Analysis history

What is the problem?

There is no way in Konveyor to browse analysis history to check the evolution of a given application.

Why is this a problem?

We have received feedback from multiple organizations and practitioners in the field about the need to keep track of the evolution of applications as changes are applied by migration teams. Knowing how the associated story points or incidents of an application evolve over time provides a measure of progress that can be consumed individually by application owners and migration teams, or aggregated across the entire portfolio for senior management roles that might want an overview of the overall migration initiative.

Proposed solution

Even though storing all the information associated with issues for each analysis wouldn't be practical and would introduce very high storage requirements, Konveyor should at least store the following data for each analysis (a minimal record sketch follows the list):

  • Application
  • Date
  • Sources
  • Targets
  • Total story points
  • Total number of incidents
  • Number of incidents per category
  • Version of the analyzed application if it can be determined (this could be difficult to obtain for binaries, but should be trivial when dealing with source code that uses proper build and dependency management tools)
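As a minimal sketch of what such a stored record could look like (field names and values are illustrative only, not a committed schema):

- application: customer-legacy
  date: 2023-04-05T17:36:22Z
  sources: [weblogic]
  targets: [cloud-readiness, linux]
  storyPoints: 115
  incidents:
    total: 320
    byCategory:
      mandatory: 40
      optional: 250
      potential: 30
  applicationVersion: 1.4.2             # only when it can be determined from the build descriptor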

This information should be made consumable in the application profile via graphs that display the number of incidents or story points over time, allowing the user to choose between the two metrics and to determine the target period of time (last 6 months, last 30 days, last 7 days, last 24 hours...). The user should be able to click on each point in the graph and get information on all of the above fields for the given analysis.

It would be desirable to have additional graphs under the Reports option in the left menu to consume this information in an aggregated way across the entire portfolio, although this could be fleshed out in a different RFE.

[UI] Improve Empty State Messaging for Lack of Completed Application Questionnaires

Issue Description:
The current empty state within the application's completed questionnaires view lacks informative messaging, leaving users unaware of why no completed questionnaires are displayed.

Problem:
The absence of informative messaging in the empty state fails to communicate to users why the completed questionnaires section is empty, leading to confusion or uncertainty.

Expected Behavior:
When users encounter an empty state in the completed questionnaires view, they should be provided with information explaining why no completed questionnaires are currently available.

Suggested Solution:
Revise the empty state messaging to include a clear and concise message informing users about the reason behind the lack of completed questionnaires. For instance, "No completed questionnaires available. To get started, please fill out and submit an application questionnaire."


[RFE] Enhanced Assessment Module

What is the problem?

Based on the feedback provided for the assessment module, it has been determined that, although useful, its capabilities don't fully cover the differences between organizations. The general aspects of application containerization are addressed, but sometimes organizations have particularities that can be key for assessing the suitability of a given application or application type/archetype. The very fact that assessments only cover application containerization matters has also been deemed insufficient by some users, who would like the flexibility to put together their own questionnaires or expand on what the tool provides out of the box.

Another aspect, affecting UX, is the fact that assessments have a one-to-one relationship with applications. As large scale application modernization/migration projects are all about classifying applications into different application types or archetypes to then come up with suitable migration strategies, it feels cumbersome that the assessment process has been designed with only one application in sight. Copying assessments is also seen as clunky, and has problems when making changes to assessments at scale, which could be a situation when new information or stakeholders appear on the modernization lead's radar.

Finally, there have been some voices requesting more intelligence in the way questions get asked. For some users, there is a feeling of detachment between the information that gets collected in the application inventory and the assessment questionnaire. The general feedback is that if an application has been tagged in a certain way, the questions that get asked should be aligned with that, and if important information gets surfaced during the assessment, it should flow back to the inventory as well.

Why is this a problem?

The Konveyor user experience should be fully centered on allowing users to manage applications and surface information about them at scale, which is not the case with the current assessment module. Pathfinder was built based on a Red Hat consulting tool with the same name that didn't include the notion of an application inventory, hence the detachment described before. The tool wasn't conceived with scalability in mind, and was more oriented towards providing guidance in the early stages of a modernization/migration project. Nevertheless, the possibility of loading custom questions was available in the original tool, but got lost somewhere in the refactor that ported it to the Konveyor suite. This has led to users having to hack their way through the Pathfinder API or even the database to load their own questionnaires, which is far from the fully integrated and seamless user experience that Konveyor aims to have. In some extreme cases, users have opted not to use the assessment module at all, which led to the requirement of being able to review an application without running an assessment.

Proposed solutions to consider

A new assessment module needs to be implemented, taking the following requirements as guidelines:

  • The module should be able to import questionnaires using a custom YAML syntax for questionnaire definition (a hypothetical sketch follows this list).
    • The syntax should support a way of skipping questions if a certain tag is present on the application or archetype.
    • The syntax should support a way of defining tags to apply to applications if a certain answer has been provided.
  • The module should be able to export existing questionnaires to YAML as well.
  • The concept of application archetypes needs to be implemented, so both individual applications and archetypes can have an associated assessment.
    • Archetypes should be defined by a group of tags.
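A hypothetical questionnaire definition illustrating the requirements above (the syntax and field names are made up for illustration, not the final format):

name: Cloud readiness
questions:
  - text: Does the application write to the local filesystem?
    skipWhenTagged: [Stateless]          # skip if this tag is present on the application or archetype
    answers:
      - text: "Yes"
        risk: yellow
        applyTags: [Local storage]       # tag the application/archetype when this answer is selected
      - text: "No"
        risk: green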

Change the behavior of auth, so use of Keycloak is opt-in, disabled by default.

Consider changing the default behavior to make deployment of Keycloak be an explicit "opt-in".

We would change the https://github.com/konveyor/tackle2-operator/blob/main/roles/tackle/defaults/main.yml#L9 to

feature_auth_required: false

This will result in a lighter weight Konveyor installation that gives the end user full Administrator access inside of Konveyor; i.e., we will lose the ability to differentiate between "Migrator" and "Administrator" users, and anyone can switch between the views and access all functionality.

For those users who want to retain the protection of "Administrator" vs "Migrator" inside of Konveyor, they can explicitly set this value in the CR they create:

feature_auth_required: true
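For reference, the setting would be carried in the Tackle CR spec, roughly like this (metadata and namespace are illustrative):

apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: konveyor-tackle
spec:
  feature_auth_required: true   # opt back in to Keycloak-backed auth and role separation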

[RFE] Dynamic Reports - Incident: Add some indication of the source code language

When rendering the codeSnip of an Incident in the UI, we would like to use syntax highlighting where possible. If the language is known and we can provide a language property on the Incident (or somewhere on the Issue or File or wherever it makes sense), the UI can have some supported list of languages it knows how to highlight. If the language is missing or unrecognized the UI can fall back to no syntax highlighting.

As part of this we should also determine the list of supported languages for which we need syntax highlighting.
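As an illustration only (exact placement of the property is what this RFE needs to decide), an Incident could carry the hint like this:

incident:
  uri: file:///src/main/java/com/example/Foo.java
  language: java        # hypothetical property; the UI maps it to a supported highlighter or falls back to no highlighting
  codeSnip: |
    public class Foo {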

[MIG-521] RFP: Abnormal event reporting pattern to support tracking and troubleshooting reference

During our discussions about direct migration network validation, we'd like to be able to "classify" errors or warnings into specific buckets. So for example, because direct migration network validation isn't a clear cut case of "if this happens, then it's a network problem", we have to fall back on some kind of a heuristic. Once detected, it would be very useful for us to know how many users are actually facing problems satisfying the direct migration requirements, and we'd also like to be able to point them to a troubleshooting document that can help guide them to resolving their problems themselves.

A generic error classification and reporting system would be very useful for that.

The thought is to use something in the spirit of error codes (not exactly error codes), but having a bounded and well defined set of buckets that error/warning conditions can fall into. Ex:

conditions:
  - craneErrorType: DirectNetworkFailure
    message: <some human readable>

Maybe there's a well defined set of craneErrorTypes that we can document. The bounded cardinality would allow us to plug into metrics.

[RFE] - Error Reporting and Normalization

As a user, understanding what high-level errors mean helps you quickly debug and solve issues. While there will always be a need for deep-dive debugging, expecting users to search through hundreds of log lines that are not for their application is, I believe, an unrealistic expectation.

We need a cascading set of error/warning information to display to users to give them a complete picture of the unhappy path scenarios.

Full Stop Errors (sometimes referred to as "fatal")

The most apparent set of errors is when one of the following occurs:

  1. The addon image is unable to be run.
  2. The addon code itself fails if there is any validation of the inputs that could cause this failure.
  3. The code the addon calls fails with a validation error (e.g. a rule cannot be parsed, or providers are not configured correctly or fail to start). Timeouts fall into this category as well.

For this set of errors, we want to consider that:

  1. This means the user will get zero meaningful data from this run.
  2. We probably want to have some defined types of errors, similar to (but not matching) the "ImagePullBackoff", "ContainerFailed" or "StartUpProbeFailed" kinds of errors that Kubernetes gives you.
  3. The common "heuristic" for this type of error is that the user gets no data and, most likely, something in the input to the analysis run must be fixed by the user.

Fall Through Errors or "Soft Errors"

There is a second set of Errors that can occur. These errors will mostly happen when running the code the addon will call. The following are examples.

The clear case for this is:

  1. The Addon runs successfully.
  2. The code the addon calls runs successfully, but the output contains "errors" alongside data that is complete.
  3. The user gets some set of "good" data, but some rules have errored, so only a subset of conclusions can be drawn from the analysis run.
  4. The rules could fail for a couple of different reasons: the rule is malformed for the capability, the provider fails for that particular condition or instance, or the code locations/custom variables cannot be found and therefore the message may be malformed.

These particular scenarios should be captured and displayed. They basically say that some of the information may be useful, but you cannot say that all of the data returned is correct.

Warnings

We have the concept of warnings here that we should consider and categorize. I see this happening when something like the following occurs.

  1. The task is taking too long
  2. The addon completes with no issues being reported but has no violations to save.

Another concern in this vein is that it would be nice to have some way to show the skipped and unmatched rules. This information will be vital for a user whose "new-custom-rule-x" is not being matched. If the only option is to assume this is the case because it does not show up in the issues screen, that seems like a less desirable UX/UI. We need some way to surface this information.

Skipped rules are rules filtered out by the user in the input. These should fall into the above category of unmatched rules.

Because searching for violations is not, AFAIK, tied to a specific analysis but rather to the latest run, we need to have a way for users to see why specific violations/rules may have disappeared. This could be our most significant source of customer cases.
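Pulling the categories above together, a purely illustrative sketch of a cascading report attached to a task (names and structure are hypothetical):

task: analysis-42
errors:            # full stop / fatal: the run produced no meaningful data
  - type: ProviderStartupFailed
    message: the java provider did not start within the configured timeout
softErrors:        # partial results: some rules could not be evaluated
  - rule: my-custom-rule-0001
    reason: malformed condition for the capability
warnings:
  - type: LongRunningTask
unmatchedRules:    # evaluated but matched nothing; vital for the "new-custom-rule-x" case
  - new-custom-rule-x
skippedRules:      # filtered out by the user in the input
  - quarkus-specific-0002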

Update to a PostgreSQL container that will be supported longer term

We're currently relying on a PostgreSQL 12 container running on CentOS 7.

PostgreSQL 12's last release will be November 14, 2024.
https://www.postgresql.org/support/versioning/

However, CentOS 7 EOL comes even sooner at June 30, 2024
https://cloud.google.com/compute/docs/eol/centos-eol-guidance
https://wiki.centos.org/About/Product

There are no PostgreSQL 12 containers from SCL org
https://github.com/sclorg/postgresql-container

As far as I can tell that effectively leaves us until June 30, 2024 to vet a newer version, resolve any issues, and work out how to perform an upgrade to the new version properly for existing installs. This would apply to both keycloak and pathfinder DBs.

[RFE] Custom 'profiles' for install configuration 'defaults'

What is the problem?

We are concerned about the User Experience we are delivering with Konveyor in regard to enabling/disabling certain features as "default" with an install.

We currently have features that are implemented and available to an end user, but are intentionally disabled and not rendered in the webUI unless an Administrator explicitly modifies the configuration to enable them.

An example is with the new ability to download analysis reports. By default this functionality is disabled AND hidden from the Migrator perspective:
[screenshot: report downloads disabled and hidden in the Migrator view]

An Administrator can log in, switch to the Administrator view and go to 'General' configuration to enable downloads for HTML reports:

[screenshot: Administrator view, 'General' configuration with HTML report downloads enabled]

Now, going back to Migrator view, the Download report: HTML link appears
[screenshot: Migrator view showing the Download report: HTML link]

Why is this a problem?

We are worried that disabling functionality may lead to friction in the user experience, but even more than that, the pattern we've adopted of disabling and NOT rendering features in the webUI hides the capability, reducing the likelihood that a new user will be able to explore and learn on their own without reading documentation.

On the flip side, we recognize there are enterprise production users who will want the default values selected and may want those features not rendered to keep the UI 'clean'.

This appears to be an issue of differing perspectives on intended usage scenarios that are in conflict with each other.
For example, we have at least two different usage scenarios identified (probably more will emerge):

  • Enterprise production install: Favors 'security' so some features are desired to be disabled
  • Upstream trial exploration: Favors all features being available to help gain a sense of the solution's capabilities

Proposed solutions to consider

Introduce a 'profile' to group default values for initial install

Allow grouping default configuration values into a concept of 'profiles'.
From the Operator CR we can select which 'profile' we want, which will only affect the default values to be seeded; i.e., after Konveyor has been installed we wouldn't intend to watch the 'profile' value and update settings if a user changed the CR's 'profile' to a different value.
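As a sketch of the idea (the 'profile' field and its values are hypothetical and only meant to illustrate the proposal):

apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
spec:
  profile: upstream-trial        # seeds permissive defaults, e.g. report downloads enabled
  # individual settings can still be set explicitly and would take precedence over the seeded defaults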

[RFE] Download HTML and CSV analysis reports

Tackle doesn't allow downloading the HTML reports that are produced by the analysis module. Furthermore, no CSV output is produced either, while that option is available in the Windup CLI. Feedback from the field is pointing out a need to make both assets downloadable. Nevertheless, both of them contain data (source code and dependencies) that can be potentially sensitive for some organizations, so enabling the download should be optional and disabled by default, with administrators being the only ones authorized to manage that configuration.

Plugins Passthrough Information

Plugins must have a method of requesting extra information to be passed to the plugin. The CLI for transformation will also need to be able to expose these as CLI flags.

Recommended Targets for Archetypes

Issue Description:
As a Konveyor administrator, I need the ability to set recommended migration targets for the migrator to streamline the migration process.

Details:

  • Develop functionality to configure analysis and set migration targets.
  • Ensure custom cards appear prominently at the top of the list in the Set Targets section.
  • Admins should be able to easily select and prioritize custom migration targets.

[RFE] Tackle UI to auto populate details for an existing analysis

Tackle UI doesn't bring back/pre-populate the details for an analysis that was run previously. The "application analysis" popup always starts from the beginning. For instance, if I have run an analysis and I just want to re-run the same analysis with some changes (say, I just want to include one more migration target), the MTA web UI doesn't allow me to do it. One has to go through all the steps (uploading/specifying the application binary, filling in all required targets, specifying custom rules, etc.) again to re-run the analysis.

[RFE] Ease creation/configuration of users from WebUI

As an administrator of Konveyor I would like to be able to create and configure authorized users from the WebUI.

This RFE to consider is based on user feedback from a slack conversation in #konveyor on kubernetes.slack.com (link)

Allow initial install of Konveyor to be seeded with a demo app so a new user can quickly run several workflows

For background, I was reading this link and it made me think of how we could 'shorten time to value' for a new user who is installing Konveyor by seeding enough demo configuration that they can install and then run a few given scenarios to get a hands-on feel for its capabilities.

What if we defaulted the behavior of the operator to:

  • Seed an application in Inventory that highlights analysis capabilities we want to show. Assume this is configurable and can be disabled via a parameter.

In addition, I think it would also be worth considering if we want to default to disabling 'Auth' and make this an opt-in.

[RFE] Cover the OpenTracing to OpenTelemetry migration path

@brunobat from the Quarkus team is working on a comprehensive guide to document the migration from OpenTracing to OpenTelemetry in the context of Quarkus 3 applications. @dymurray I think this could be a good candidate for our team to try to produce rules based on documentation about a given migration path. Most if not all of the contents in the guide refer to Quarkus specifics, but I think it would also be interesting to figure out if we can extract some generic guidelines that could apply to other runtimes like Spring Boot (there is a generic migration guide here), and then provide the runtime-specific recommendations based on the source technology of the application.
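As a rough sketch of the kind of rule this could translate into (using the YAML rule shape the analyzer consumes; the ruleID, pattern and message below are made up for illustration):

- ruleID: opentracing-to-opentelemetry-00001
  category: mandatory
  effort: 1
  description: OpenTracing API usage detected
  message: Replace the OpenTracing API with OpenTelemetry; see the Quarkus and generic OpenTelemetry migration guides.
  when:
    java.referenced:
      location: IMPORT
      pattern: io.opentracing*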

[RFE] Automated tagging of resources with Windup

One of the key assets in Konveyor is its tagging model that allows classifying applications in multiple dimensions. However, the task of tagging applications remains manual, which might not scale well when dealing with a large portfolio. The current enhancement aims at providing a way for addons to automatically tag applications based on the information they are able to surface. The starting point would be to leverage the technology stack information that Windup is able to collect during an analysis and translate that into tags that get automatically added to an application.

[RFE] Allow usage of custom images behind a protected registry

We would like to ease the usage of credentials for ImagePullSecrets with images our Operator is using. It is important that this supports both vanilla Kubernetes and OpenShift.

See discussion from #konveyor slack with @jmontleon helping:
https://kubernetes.slack.com/archives/CR85S82A2/p1689684174225349?thread_ts=1689597418.341729&cid=CR85S82A2

Highlights from slack:

Hi all, I customized some of the images used by the operator (tackle-ui, tackle-pathfinder and tackle-hub), I pushed them to a private registry which requires authentication. Any suggestion on how to make konveyor authenticate to pull the images? I've tried defining a secret and then added the secret under imagePullSecrets (in the yaml file described at "create the Tackle instance" of the installation guide). But still the POD does not even try to authenticate. Thank you folks

For OpenShift:

get the existing secret, update it with your additional credentials for your private registry, and push back

https://docs.openshift.com/container-platform/4.13/openshift_images/managing_images/using-image-pull-secrets.html#images-update-g[…]age-pull-secrets

For Kubernetes (non-OpenShift)

Similar secret creation, then the pod definition is updated to use the pull secret:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
As long as the operator is using strategic merge strategy for the deployments (this is the default, so unless it needed to be changed for a reason) you should be able to add this to the deployment definitions. Either way this could/should probably be an RFE for the operator so someone can hand us a secret name and we can update the deployment pod definitions. Shouldn't be too hard. I'm not sure I want to be in the business of managing the secret itself, but that's up for debate.

[RFE] Ensure Warning information is displayed through WebUI for Analysis runs

This issue is to help us track a small refinement to the integration of analyzer-lsp.

We want to be sure that if a user creates a custom target with badly formed rules, or if something unexpected happens during the analysis run, we are able to signal to the user via the WebUI that something unexpected occurred; perhaps this would be a "Warning" that points them to a means of debugging the problem.

The use case imagined is that a user has syntax errors in a custom rule, and the analyzer skips that custom rule because it saw something wasn't right. Assume that the analysis runs and completes successfully (as it skipped the problematic rule). We want to be sure we are not swallowing the extra information, such as warnings, when the user views information in the UI.

We need a way to signal to the User something "odd" occurred that is not at the level of an "error" but it is something they may need to investigate.

[RFE] Dynamic Reports - Incident: Add line numbers to the `codeSnip` string (right-aligned)

When rendering the codeSnip string of an Incident in the UI, we display line numbers from the source file in a gutter to the side of the code snippet. In order to align these numbers and correctly mark the actual offending line, we need to know how much context is provided in the code snippet around the line indicated by the Incident's line property. I propose a codeSnipStartLine or similarly-named property, which would be the line number where the codeSnip begins.
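A small illustration of the proposal (property name as suggested here; values made up):

incident:
  uri: file:///src/main/java/com/example/Foo.java
  line: 42                 # the offending line in the source file
  codeSnipStartLine: 37    # source line number of the first line included in codeSnip
  codeSnip: |
    ...

With codeSnipStartLine known, the gutter numbers can simply count up from 37 and the row for line 42 can be highlighted as the offending one.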

[RFE] Add recommended targets to Archetypes

In large scale migration projects, it is often important to provide guidance for developers on what could be the most suitable migration path for a given application based on its taxonomy. Since archetypes have become a first class entity to classify applications in different types or categories, it makes sense to associate those recommendations with them. At a very high level, this enhancement would include the following:

  • Allow architects to associate recommended migration targets to a given archetype, including custom migration targets.
  • When analyzing a single application, the UI would highlight the recommended migration targets somehow, but it would still be up to the user to decide which should be selected.
  • When analyzing multiple applications, recommended targets will only be highlighted if all applications belong to the same archetype.

This enhancement would be especially interesting for the Backstage/Janus integration use case, as it would allow organizations to enforce certain migration paths from their IDP. For that case:

  • Discovery analysis would be automatically run on the application.
  • Based on that discovery, the application would likely be associated with an archetype.
  • Based on the archetype, the user would be presented (maybe only) with the recommended migration paths for analysis.

Crane transform plugin priority

The crane transform CLI command needs a way to indicate plugin priority in the event of patch conflicts returned by multiple plugins.

[RFE] Proper SSL Verification for Pathfinder and Keycloak Postgres

The keycloak instances don't appear to be forcing verify-ca, or at least require, as the SSL mode. The default mode is prefer, which may or may not run with SSL at all.

I don't believe either postgresql instance has SSL enabled, and there is no guarantee all containers will run on the same nodes; most CNI providers don't encrypt traffic over the wire between nodes, which leads to increased potential for snooping.
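As a hedged illustration of the direction (how the setting is actually wired depends on each client: libpq-style environment variables for libpq-compatible clients vs. JDBC URL parameters for the Java components):

env:
  - name: PGSSLMODE
    value: verify-ca                    # require SSL and verify the server certificate against a trusted CA
  - name: PGSSLROOTCERT
    value: /etc/postgresql/ca.crt       # hypothetical mount path for the CA certificate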

Original issue is at: konveyor/tackle2-hub#240 with more conversation and details.

[RFE] Provide Portuguese as a language

There seems to be interest from multiple Brazilian organizations in having Portuguese as an available language for Konveyor. Konveyor has built-in support for i18n, so it would just be a matter of providing the appropriate language files. @jortel can we identify exactly what would need to be provided by anyone willing to contribute?

Historical Analysis

Historical Analysis of previous runs - Ability to keep limited history (last 3-5 runs) of assessments and analysis (for governance teams)

Figma Design Link

Design note for Dev
In terms of functionality and scalability, review what is done in OCP topology graphs


[Tracking] Janus (backstage.io) Integration

This is intended to be a high-level tracking issue to help us organize and find other relevant issues related to integration with Janus.

Janus is:
A Red Hat sponsored community for building Internal Development Platforms and Plugins with backstage.io

Janus Blog: https://janus-idp.io/blog/

Issues:

Update outdated keycloak container upstream

We're currently using keycloak 18.0.2-legacy. This image hasn't been updated in 9 months. It looks like the last legacy version was 19.0.3, which was only updated 5 months ago.

If we need to do something to get on a current non-legacy release, we should do it so we're not using outdated and potentially vulnerable releases of keycloak.

[RFE] Enhanced file imports

CSV has some limitations in its capabilities, and it's not the most widespread spreadsheet format used by organizations worldwide. Even though the contents of a spreadsheet in other formats can be adapted to fit the CSV template that Konveyor uses, the process can be cumbersome, especially when dealing with dependencies at scale, since Record Types are not the most practical approach.

To solve this, the proposal would be to include support for Excel (extensions .xlsx and .xls) and OpenDocument Format (extension .ods) spreadsheets. This would be aligned with the most used tools in the market, while easing the declaration of different record types (applications, dependencies and potentially more in the future) through the usage of dedicated tabs.

[RFE] Naming changes

Direct feedback from the field suggests that some names used for UI components and entities are confusing and not descriptive enough:

  • Administrator and Developer perspectives are being confused with the different personas available in Konveyor (Administrator, Architect, Migrator), when they refer to different perspectives depending on the domain or realm of operations to perform. Even though this is aligned with the perspective naming used in OpenShift, it would be better to change it and have it aligned to our domain. The proposal is to replace "Administrator perspective" with "Administration perspective" and "Developer perspective" with "Migration perspective".

  • "Tag types" in the Controls view should be replaced with "Categories", which is a more descriptive name.

[RFE] Detect language & frameworks being used in repositories on import

As a user importing a source code repository to Konveyor, I would like Konveyor to understand the languages and frameworks used in the source code (prior to running a full code Analysis), similar to the kind of view GitHub provides on a repository with a breakdown of language percentages, plus more metadata to help get a sense of the frameworks being used.

One possibility to consider for achieving this is to explore leveraging 'Alizer':
https://github.com/redhat-developer/alizer which is a component of https://devfile.io/

Related to #122

[RFE] From UI show the targets selected for a completed analysis

As of today I don't see a 'pretty' way from the UI to go back to a completed Analysis and see what targets were selected for that analysis run.

There is a way to get to the data from the UI by viewing the raw JSON output from the task details, which shows us:
`{"id":2,"createUser":"","updateUser":"","createTime":"2023-04-05T17:36:22.830108304Z","name":"customer legacy.1.windup","addon":"windup","data":{"mode":{"artifact":"","binary":false,"csv":true,"diva":false,"withDeps":true},"output":"/windup/report","rules":{"bundles":[{"id":2,"name":"Containerization"},{"id":6,"name":"Linux"}],"path":"","tags":{"excluded":[]}},"scope":{"packages":{"excluded":[],"included":[]},"withKnown":false},"sources":[],"tagger":{"enabled":true},"targets":["cloud-readiness","linux"]},"application":{"id":1,"name":"customer legacy"},"state":"Succeeded","image":"quay.io/konveyor/tackle2-addon-windup:latest","started":"2023-04-05T17:36:23.196146583Z","terminated":"2023-04-05T18:39:48.469375098Z","report":{"id":2,"createUser":"admin.noauth","updateUser":"admin.noauth","createTime":"2023-04-05T17:37:21.34877703Z","status":"Succeeded","error":"","total":2,"completed":2,"activity":["Fetching application.","[BUCKET] Report deleted:/windup/report duration:165.98252ms.","[CMD] Running: /usr/bin/ssh-agent -a /tmp/agent.1","[CMD] succeeded.","[SSH] Agent started.","[GIT] Cloning: https://github.com/konveyor/example-applications.git","[FILE] Created /working/.gitconfig.","[CMD] Running: /usr/bin/git clone https://github.com/konveyor/example-applications.git /working/source/example-applications","[CMD] succeeded.","[CMD] Running: /usr/bin/git checkout main","[CMD] succeeded.","[MVN] Fetch dependencies.","[CMD] Running: /usr/bin/mvn dependency:copy-dependencies -f /working/source/example-applications/example-1/pom.xml -DoutputDirectory=/working/deps -Dmaven.repo.local=/cache/m2","[CMD] succeeded.","[CMD] Running: /opt/windup --exitCodes --batchMode --output /working/report --input /working/deps --exportSummary --input /working/source/example-applications/example-1 --exportCSV --target cloud-readiness --target linux","[CMD] succeeded.","[BUCKET] Report updated:/windup/report duration:314.268565ms.","[TAG] Tagging Application 1."],"task":2}}

[RFE] Allow the usage of Gradle build files for the Source + Dependencies Analysis mode

The source+dependencies analysis mode doesn't recognize Gradle build files, so it's not able to retrieve the required dependencies. Even though the usage of Gradle is not as widespread as Maven's, it would be good to cover this use case as well. A way to approach it could be to transform the Gradle build file into a POM, and use the logic we already have from there. The Maven Publish Plugin from Gradle seems to be able to do this as explained here.

[RFE] Migration Waves

The Konveyor project aims at providing differential value at each stage of the migration process. So far, most of the focus has been put on the assessment and rationalization stages, offering little value for the planning required to scale out the migration process across the whole application portfolio. When dealing with a large scale portfolio of hundreds or even thousands of applications, it is simply not possible to execute the migration process in a big bang approach addressing the whole portfolio at once. A common way to tackle adoption at scale is to break the portfolio into different waves and execute the adoption effort in an iterative fashion. This enhancement aims to enable Konveyor to define migration waves with the applications in the inventory as a first step towards a more sophisticated approach that could automate the calculation of these waves based on different criteria.

This RFE is fleshed out in the Migration Waves Enhancement.
