XTF

XTF is a framework designed to ease testing in the OpenShift environment.

XTF is an open-source project managed in best-effort mode by anyone who is interested in or is using it. There is no dedicated maintainer and there is no set time frame in which any given XTF issue will be fixed.

XTF Maven repository

The XTF repository moved to the JBoss public repository in early 2021; it was previously hosted on Bintray. Please update your projects accordingly so they depend on the latest XTF versions, i.e. adjust your project's pom.xml by adding (if not already there) the following snippet:

...
<repository>
  <id>jboss-releases-repository</id>
  <name>JBoss Releases Repository</name>
  <url>https://repository.jboss.org/nexus/content/groups/public/</url>
  <snapshots>
     <enabled>false</enabled>
  </snapshots>
  <releases>
     <enabled>true</enabled>
  </releases>
</repository>

<repository>
  <id>jboss-snapshots-repository</id>
  <name>JBoss Snapshots Repository</name>
  <url>https://repository.jboss.org/nexus/content/repositories/snapshots</url>
  <snapshots>
     <enabled>true</enabled>
  </snapshots>
  <releases>
     <enabled>false</enabled>
  </releases>
</repository>
...

Modules

Core

Core concepts of the XTF framework used by other modules.

Configuration

While the framework itself doesn't require any configuration, it can ease some repetitive setup in tests. XTF can be set up in four ways, with priority from top to bottom:

  • System properties
  • Environment variables
  • test.properties file in the root of the project, designed to contain user-specific setup. You can use the -Dxtf.test_properties.path property to specify a different location for the user-specific setup.
  • global-test.properties file in the root of the project, designed to contain a shared setup. You can use the -Dxtf.global_test_properties.path property to specify a different location for the shared setup.

The mapping between system properties and environment variables is done by lower-casing the environment variable, replacing _ with . and prepending xtf. to the result.

Example: OPENSHIFT_MASTER_URL is mapped to xtf.openshift.master.url.
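
As a minimal illustration of this rule (not part of the XTF API; the helper below is hypothetical), the conversion can be expressed as:

// Hypothetical helper illustrating the documented mapping rule.
public class EnvToPropertyMapping {
    static String toPropertyName(String envVariable) {
        // lower-case, replace '_' with '.', prefix with "xtf."
        return "xtf." + envVariable.toLowerCase().replace('_', '.');
    }

    public static void main(String[] args) {
        System.out.println(toPropertyName("OPENSHIFT_MASTER_URL")); // prints xtf.openshift.master.url
    }
}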

OpenShift

The OpenShift class is the entry point for communicating with OpenShift. It extends OpenShiftNamespaceClient from the Fabric8 client, as it is meant to be used within the one namespace where tests are executed.

The OpenShift class extends the upstream version with several shortcuts, e.g. retrieving any Pod or its log by DeploymentConfig name only. This is useful in test cases where we know that we have only one pod created by a DeploymentConfig, or where we don't care which one we get. The class also provides access to OpenShift-specific Waiters.

Configuration:

Take a look at the OpenShiftConfig class to see the possible configuration properties. Setting some of them will allow you to obtain an instance via OpenShift openShift = OpenShifts.master();.
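
For illustration only (property names taken from examples elsewhere in this document; check OpenShiftConfig for the authoritative list), a global-test.properties like the following lets you obtain a namespaced client:

# global-test.properties (illustrative values)
xtf.openshift.url=https://api.my-cluster.example.com:6443
xtf.openshift.namespace=my-tests
xtf.openshift.master.token=<TOKEN>

OpenShift openShift = OpenShifts.master();
openShift.getConfigMaps().forEach(cm -> System.out.println(cm.getMetadata().getName()));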

Pull Secrets

There's a convenient method, OpenShift::setupPullSecret(), to set up pull secrets as recommended by the OpenShift documentation. The xtf.openshift.pullsecret property is checked by the ProjectCreator listener and by BuildManager to populate projects with the pull secret, if provided. The pull secret is expected to be provided in JSON format.

Single registry

{"auths":{"registry.redhat.io":{"auth":"<TOKEN>"}}}

Multiple registries

{"auths":{"registry.redhat.io":{"auth":"<TOKEN>"},"quay.io":{"auth":"<TOKEN>"}}}

Waiters

Waiter is a concept for conditional waiting. It retrieves an object or state at a specified interval and checks it against the specified success and failure conditions. When one of them is met, the waiter quits. If neither is met within the timeout, an exception is thrown.

XTF provides two different implementations (SimpleWaiter and SupplierWaiter) and several preconfigured instances. All default parameters of the preconfigured Waiters can be overridden.

OpenShifts.master().waiters().isDcReady("my-deployment").waitFor();

Https.doesUrlReturnsOK("http://example.com").timeOut(TimeUnit.MINUTES, 10).waitFor();
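
A custom waiter can be built the same way; a small hedged sketch, assuming SimpleWaiter accepts a BooleanSupplier (as used elsewhere in this document) and supports the fluent timeout/interval calls shown above:

new SimpleWaiter(() -> OpenShifts.master().getConfigMaps().size() == 3)
        .timeOut(TimeUnit.MINUTES, 5)
        .interval(5_000)
        .waitFor();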

BuildManager

BuildManager caches test builds in one namespace so that they can be reused. After a specified ManagedBuild succeeds for the first time, subsequent deployments only return a reference, because the build is already present.

BuildManager bm = BuildManagers.get();
ManagedBuild mb = new BinaryBuild("my-builder-image", Paths.get("/resources/apps/my-test-app"));
ManagedBuildReference reference = bm.deploy(mb);

bm.hasBuildCompleted(mb).waitFor();

Image

A wrapper class for images specified by URL. Its purpose is to parse them and turn them into ImageStream objects.

Specifying Maven

In some images Maven needs to be activated, for example on RHEL7 via the script /opt/rh/rh-maven35/enable. This can be controlled by the following property:

  • xtf.maven.activation_script - path to Maven activation script. Defaults to /opt/rh/rh-maven35/enable if not set.

Not setting this option correctly might result in faulty results from ImageContent#mavenVersion().

Specifying images

Every image that is set in global-test.properties using xtf.{foo}.image can be accessed by using Images.get(foo).
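
For example (a hedged sketch; "foo" is just an illustrative id):

xtf.foo.image=registry.example.com/user/repo:tag      (in global-test.properties)

Image fooImage = Images.get("foo");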

Products

Allows holding basic and custom properties related to a tested product image in a properties file. Example considering maintenance of one image version:

xtf.foo.image=image.url/user/repo:tag
xtf.foo.version=1.0.3

XTF also considers the possibility of maintaining several versions. In this case, add a "subId" to your properties and specify xtf.foo.subid to activate the particular properties (in a 'pom.xml' profile, for example). Most of the properties can be shared for a given product, while the un-versioned image property overrides the version-specific one.

Example considering maintenance of two image versions:

xtf.foo.image                               // Will override versions image property
xtf.foo.templates.repo=git.repo.url         // Will be used as default if not specified in version property
xtf.foo.v1.image=image.url/user/repoV1:tag1
xtf.foo.v1.version=1.0.3
xtf.foo.v2.image=image.url/user/repoV2:tag2
xtf.foo.v2.version=1.0.3

Retrieving an instance with this metadata: Products.resolve("foo");
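
For example, activating the v1 properties for a run and resolving the product could look like this (a hedged sketch; the exact Product API depends on the XTF version):

mvn clean test -Dxtf.foo.subid=v1

Products.resolve("foo");   // with subid=v1 this resolves the xtf.foo.v1.* properties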

Using TestCaseContext to get name of currently running test case

If junit.jupiter.extensions.autodetection.enabled=true, then the JUnit 5 extension cz.xtf.core.context.TestCaseContextExtension is automatically registered. It sets the name of the currently running test case into TestCaseContext before the test case's @BeforeAll methods are called.

The following code can then be used to retrieve the name of the currently running test case:

String testCase = TestCaseContext.getRunningTestCaseName();
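
For example, inside a test class (a hedged sketch; @Slf4j only provides the log field):

@Slf4j
public class SmokeTest {

    @BeforeAll
    public static void logTestCase() {
        // The TestCaseContextExtension has already set the test case name at this point.
        log.info("Running test case: {}", TestCaseContext.getRunningTestCaseName());
    }
}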

Automatic creation of namespace(s)

XTF can automatically manage creation of the testing namespace defined by the xtf.openshift.namespace property. This namespace is created before any test case is started.

This feature requires the XTF JUnit5 extension cz.xtf.junit5.listeners.ProjectCreator to be enabled. This can be done by adding a cz.xtf.junit5.listeners.ProjectCreator line to the following files:

src/test/resources/META-INF/services/org.junit.jupiter.api.extension.Extension
src/test/resources/META-INF/services/org.junit.platform.launcher.PostDiscoveryFilter
src/test/resources/META-INF/services/org.junit.platform.launcher.TestExecutionListener

Run test cases in separate namespaces using xtf.openshift.namespace.per.testcase property

You can enable running each test case in a separate namespace by setting xtf.openshift.namespace.per.testcase=true.

Namespace names follow the pattern "${xtf.openshift.namespace}-TestCaseName". For example, for xtf.openshift.namespace=testnamespace and test case org.test.SmokeTest it will be testnamespace-SmokeTest.

You can limit the length of the created namespace with the xtf.openshift.namespace.per.testcase.length.limit property. By default it's 25 characters. If the limit would be exceeded, the test case name part of the namespace name is hashed to stay within the limit, so the namespace name would look like testnamespace-s623jd6332.

Warning - Limitations

When enabling this feature in your project, you may need to replace OpenShiftConfig.getNamespace() with NamespaceManager.getNamespace(). Check the methods' javadoc to understand the difference.

When using this feature, the consuming test suite must follow these rules to avoid unexpected behaviour when using cz.xtf.core.openshift.OpenShift instances:

  • Do not create a static cz.xtf.core.openshift.OpenShift variable like public static final OpenShift openshift = OpenShifts.master() at the class level. During initialization of static instances the test case, and thus the corresponding namespace, is not yet known. To avoid unexpected behaviour, a RuntimeException is thrown when this rule is broken.
  • Similarly, do not create cz.xtf.core.openshift.OpenShift variables in static blocks, and do not initialize other static variables which create cz.xtf.core.openshift.OpenShift instances. See the sketch after this list.
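
A small sketch of the rule above (illustrative only):

// Problematic - the namespace is not known yet when static initializers run, so this throws a RuntimeException:
// public static final OpenShift openShift = OpenShifts.master();

// A hedged alternative - create the instance once the test case context exists, e.g. in @BeforeAll:
private static OpenShift openShift;

@BeforeAll
public static void createClient() {
    openShift = OpenShifts.master();
}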

Service Logs Streaming (SLS)

This feature allows you to stream service output while a test is running, so you can immediately see what is happening inside the cluster. This is of great help when debugging provisioning, specifically in cloud environments, which would otherwise require you to access your Pods.

Kubernetes/OpenShift implementation

The SLS OpenShift platform implementation relies upon fabric8 Kubernetes Client API features.

The expected behavior is to stream the output of all the containers that are started or terminated in the selected namespaces.

Usage

The SLS feature can be configured and enabled either via annotations or via properties. This behavior is provided by the ServiceLogsStreamingRunner JUnit 5 extension. There are two different ways of enabling the SLS functionality, summarized in the following sections; please refer to the JUnit 5 submodule documentation to read about the extension implementation details.

The @ServiceLogsStreaming annotation (Developer perspective)

Usage is as simple as annotating your test with @ServiceLogsStreaming e.g.:

@ServiceLogsStreaming
@Slf4j
public class HelloWorldTest {
  // ...
}

The xtf.log.streaming.enabled and xtf.log.streaming.config properties (Developer/Automation perspective)

You can enable the SLS feature by setting the xtf.log.streaming.enabled property, so that it applies to all the test classes being executed.

Conversely, if the above property is not set, you can set the xtf.log.streaming.config property in order to provide multiple SLS configurations which could map to different test classes.

The xtf.log.streaming.config property value is expected to be a comma (,) separated list of configuration items, each one formatted as a semicolon (;) separated list of name/value pairs for the attributes described below, where the name/value separator is the equals character (=). A single configuration item represents a valid source of configuration for a single SLS activation and exposes the following information:

  • target: a regular expression which allows for the testing engine to check whether the current context test class name matches the Service Logs Streaming configuration - REQUIRED

  • filter: a string representing a regex to filter out the resources which the Service Logs Streaming activation should be monitoring - OPTIONAL

  • output: the base path where the log stream files - one for each executed test class - will be created. OPTIONAL, if not assigned, logs will be streamed to System.out. When assigned, XTF will attempt to create the path in case it doesn't exist and default to System.out should any error occur.

Usage examples

Given the above, enabling SLS for all test classes is possible by executing the following command:

mvn clean install -Dxtf.log.streaming.enabled=true

Similarly, enabling the feature for all test classes whose names end with "Test" should be as simple as executing something similar to the following command:

mvn clean install -Dxtf.log.streaming.config="target=.*Test"

which would differ in case the logs should be streamed to an output file:

mvn clean install -Dxtf.log.streaming.config="target=.*Test;output=/home/myuser/sls-logs"

or in case you'd want to provide multiple configuration items to map different test classes, e.g.:

mvn clean install -Dxtf.log.streaming.config="target=TestClassA,target=TestClassB.*;output=/home/myuser/sls-logs;filter=.*my-app.*"

JUnit5

The JUnit5 module provides a number of extensions and listeners designed to ease OpenShift image test management. See JUnit5 for more information.

Helm

You can use the HelmBinary.execute() method to run Helm against your cluster. The following Helm properties are introduced (see the example after the list):

  • xtf.helm.clients.url (String) - URL from which the Helm client version specified by xtf.helm.client.version is downloaded. Default: https://mirror.openshift.com/pub/openshift-v4/clients/helm
  • xtf.helm.client.version (String) - Version of the Helm client to be downloaded (from [xtf.helm.clients.url]/[xtf.helm.client.version]). Default: latest
  • xtf.helm.binary.path (String) - Path to an existing Helm client binary. If absent, the binary will be downloaded using the combination of the xtf.helm.clients.url and xtf.helm.client.version parameters.
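
For example, pointing XTF at an existing Helm binary instead of letting it download one:

mvn clean test -Dxtf.helm.binary.path=/usr/local/bin/helm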

Releasing XTF

Have a look at the release documentation to learn about the process for releasing XTF to the community.


xtf's Issues

Openshift.getEvents() fails

The update to fabric8 4.10.1 in PR 360 seems to break getting events from OCP.
The code openShift.getEvents() ends with:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://api.perf2.xpaas:8443/apis/events.k8s.io/v1beta1/namespaces/mocenas/events. Message: events.events.k8s.io is forbidden: User "xpaasqe" cannot list events.events.k8s.io in the namespace "mocenas": no RBAC policy matched. Received status: Status(apiVersion=v1, code=403, details=StatusDetails(causes=[], group=events.k8s.io, kind=events, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=events.events.k8s.io is forbidden: User "xpaasqe" cannot list events.events.k8s.io in the namespace "mocenas": no RBAC policy matched, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={}).
Tested against OCP 3.11 and 4.4.4

Improve logging for errors during images building

If the TS fails in image building, I currently see something like this:

[2019-11-01 09:50:24,695] WARN - Build HA_SERVLET_COUNTER failed!

But I expect the ERROR log level and more information about this, for example:

  • no image found in manifest list for architecture s390x, OS linux
  • x509: certificate signed by unknown authority
  • etc.

Document XTF properties (and equivalent environment variables)

Names and usage of XTF properties are spread across the framework and they're mostly undocumented. The purpose of this issue is to provide a list of XTF properties and document their usage in README.MD or any other suitable place.

Maybe a better option would be to create a single Java class with all property names and their documentation; all tooling would reference this class, so the list would be updated automatically.

Classes found with static XTF properties:
JUnitConfig
BuildManagerConfig
OpenShiftConfig
XTFConfig
WaitingConfig

then there are dynamic properties parsed in:
Product
Image

Allow to remove `xtf.bm.namespace` after test suite

In our automation we use different xtf.bm.namespace namespaces to avoid race conflicts among Jenkins job runs. However, this has the negative side effect that those namespaces are not deleted after the test suite.

The goal of this issue is to add a post-suite hook which would delete the xtf.bm.namespace namespace.

I think xtf.junit.clean_openshift should not be re-used for this purpose, as it would break the current behaviour of keeping the xtf.bm.namespace namespace. A new property like xtf.bm.clean_namespace_after_suite should be added.

[bug] Openshift waiter "hasBuildCompleted" with buildConfigName fails if the build does not exist yet ...

I came across the case where I create the build via the command line ("oc start-build ...") and then try to wait for the build to complete, using this method in the OpenShiftWaiters class:

public Waiter hasBuildCompleted(String buildConfigName) {
	return hasBuildCompleted(openShift.getLatestBuild(buildConfigName));
}

This is calling:

public Waiter hasBuildCompleted(Build build) {
	Supplier<String> supplier = () -> openShift.getBuild(build.getMetadata().getName()).getStatus().getPhase();
	String reason = "Waiting for completion of build " + build.getMetadata().getName();

	return new SupplierWaiter<>(supplier, "Complete"::equals, "Failed"::equals, TimeUnit.MINUTES, 10, reason).logPoint(Waiter.LogPoint.BOTH).interval(5_000);
}

There I get a NullPointerException because no latest build exists yet, and thus the build passed to this method is null.

I say we should wait for the latest build to exist (or check that it is the correct latest build ...).

OpenShift.clean() does not clean custom resources

We are testing an operator which introduces the resource API WildFlyServer.wildfly.org. Of course XTF has no idea of such a resource.

Now, what are the options to resolve this? Provide a method so the caller can specify additional resources they want to remove?

Or take another approach similar to oc --loglevel=10 delete all --all, which seems to first take the available resource APIs and then, for each of them, tries to remove resources. (But I have no idea how that can be implemented.)

Allow users to use KEEP_LABEL on all resources subject to `listRemovableResources`

There is no way to prevent Openshift.clean() from deleting these:

		removables.addAll(getUserSecrets());
		removables.addAll(getUserServiceAccounts());
		removables.addAll(getUserRoleBindings());

but it should be doable, e.g.:

	List<HasMetadata> listRemovableResources() {
...
		removables.addAll(getUserSecrets().stream().filter(withoutKeepLabel()).collect(Collectors.toList()));
		removables.addAll(getUserServiceAccounts().stream().filter(withoutKeepLabel()).collect(Collectors.toList()));
		removables.addAll(getUserRoleBindings().stream().filter(withoutKeepLabel()).collect(Collectors.toList()));
		...

		return removables;
	}

	private Predicate<HasMetadata> withoutKeepLabel() {
		return (hasMetadata) -> {
			if (hasMetadata.getMetadata().getLabels() != null) {
				return !(hasMetadata.getMetadata().getLabels().containsKey(KEEP_LABEL));
			} else {
				return true;
			}
		};
	}

Responsibility for fixing XTF issues

Currently XTF is supported in best-effort mode, without anyone directly responsible for the tool and for resolving issues. As there are a number of teams dependent on this tool and their testing depends on it, this situation could be improved.

The purpose of this issue is to discuss this topic and suggest a better approach.

registry.redhat.io support doesn't work

This https://github.com/xtf-cz/xtf/blob/master/utilities/src/main/java/cz/xtf/manipulation/ProjectHandler.java#L94 doesn't work in the classic scenario - project "abcd" is created, a secret in the project "abcd" is created, plus some service account edits. Then importing the image from registry.redhat.io still doesn't work, as the secret should be placed in the same namespace where the image streams sit - in the "openshift" namespace (AFAIK - see FUSEDOC-2882).
I think the correct flow should be: project "abcd" created, secret placed in the "openshift" namespace, then image stream creation (I've tested it and it worked for me).

Document XTF's APIs

Go through the XTF APIs and document anything that's not documented, for example ApplicationBuilder, BuildManager, ManagedBuild and lots of others.

When documenting, focus on interoperability with other components. E.g. explain how ApplicationBuilder works with the other builders. How does it work with ManagedBuild?

Evaluate possibility of on-demand XTF releases

XTF is used by a number of teams with different time frames for test development and testing. In case new XTF functionality (or a modification) is required for test development and testing, there is a long process to get it into a new release. As a workaround, teams rather wrap XTF classes or add possibly useful tooling into their product test suites.

The purpose of this issue is to evaluate a way to provide XTF releases on demand.

Prerequisites/Requirements(WIP):

  • XTF must have stable test suite which will be part of standard PR review (#325)
  • Script which automatically releases XTF (tag + push to maven repo)

Problem when getting logs of a pod with many containers - Suggestion for API improvement

Trying to access the logs from a pod with many containers, I got this error:

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://##SERVER##/api/v1/namespaces/##NAMESPACE##/pods/##POD_NAME##/log?pretty=false. Message: a container name must be specified for pod ##POD_NAME##, choose one of: [##CONTAINER1##, ##CONTAINER2##]. Received status: Status(apiVersion=v1, code=400, details=null, kind=Status, message=a container name must be specified for pod ##POD_NAME##, choose one of: [##CONTAINER1##, ##CONTAINER2##], metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=BadRequest, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.doGetLog(PodOperationsImpl.java:150)
at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.getLog(PodOperationsImpl.java:159)
at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.getLog(PodOperationsImpl.java:61)
at cz.xtf.core.openshift.OpenShift.getPodLog(OpenShift.java:285)

The API does not yet provide a way to get the logs of a specific container, or all container logs from a pod.

The workaround is to use the inner code of getPodLog method and do:
openshift.pods().withName(name).inContainer(##CONTAINER_NAME##).getLog() to retrieve exact logs of a container.

API improvements (Backward compatible):

  • getPodLog => Return the logs from first container ? Would it be a problem ?
  • getPodAllContainersLog => Return all logs as a collection of String
  • getPodContainerLog(String name, String containerName) => Return specific log of a specific container

Container names can anyway be retrieved from the Pod class (via pod.getSpec().getContainers() and container.getName()).

NB: This also applies to the observePodLog methods.

If needed, I can provide a PR for it. Just let me know.

Add an example how to use xtf

Add an example how to use xtf.

  • sample module in the repo
  • sample application under xtf-cz organization

And mention it in README

Document master/admin user usage and minimal set rights required to run XTF

XTF uses the master and admin users to create/update/delete various resources on OpenShift. The purpose of this issue is to document the minimal set of required roles/role bindings.
This should also provide information on which operations require admin privileges, so it's possible to write tests which can run on an OpenShift cluster without cluster-admin rights.

Supported features in first major

Before the first major release happens we should decide which utilities will be supported, which will be marked as experimental, and which should be either deleted or deprecated. This issue is meant to open the discussion. All utilities that will go into the first major release should be revisited.

Suggested classes to be supported in first major:

  • OpenShiftUtil
  • ImageRegistry
  • WaitingUtils
  • LogChecker
  • BuildManager
  • GitUtils
  • HttpClient

Suggested classes to be deleted:

  • YamlDeployer

Also, the question is what to do with ApplicationBuilder and everything that goes around it. It should definitely be revisited; the question is whether we want to support it in the form it is in now.

Logic for finding OC client does not work on OCP 4.1

  1. It seems the location of the OCP 4.1 clients changed to
    https://mirror.openshift.com/pub/openshift-v4/clients/oc. XTF uses a hardcoded URL [1].

  2. It seems the OCP 4.1 URL for finding out the version changed from /version/openshift to /version. XTF uses a hardcoded URL [2].

XTF should somehow deal with that: either have different logic for OCP 3.x and OCP 4.x, or just externalize these parameters with -Dxtf.* properties.

curl -k https://api.eap-qe-mnovak2-ocp41.eap-qe-mnovak2-ocp41.fw.rhcloud.com:6443/version
{
  "major": "1",
  "minor": "13+",
  "gitVersion": "v1.13.4+81fc896",
  "gitCommit": "81fc896",
  "gitTreeState": "clean",
  "buildDate": "2019-04-21T23:18:54Z",
  "goVersion": "go1.11.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}

[1]

private static final String CLIENTS_URL = "https://mirror.openshift.com/pub/openshift-v3/clients/";

[2]
String content = Https.httpsGetContent(OpenShiftConfig.url() + "/version/openshift");

Provide RELEASE.md and CONTRIBUTING.md

As the XTF framework became a community effort, we need to provide guidance on how to contribute and how to release a new version of the framework, so that basically anyone can create a new release on demand.

@maschmid wdyt?

Edit - suggestions for what should be documented - feel free to update it:

  • How to build and release XTF
    • Tagging and branching strategy
  • Structure of the framework
  • Project ownership / roles guidelines
  • Module naming convention and structure
  • Code convention/style (maven checkstyle/formatter plugin)
    • No imports with wildcards (for example import java.lang.*)
    • Provide javadoc to public API

Provide support for parallel test and test case execution

JUnit 5 allows running tests and test cases in parallel. However, XTF classes are not ready to be invoked from multiple threads, and the framework does not allow using different namespaces for test cases to avoid conflicts.

Parallel test/test case execution is urgently needed to speed up long test suite runs.

Deduplicate system properties

Properties like tokens, users, passwords, etc. are often duplicated, for example:

xtf.openshift.token
xtf.openshift.master.token
xtf.openshift.admin.token

The same combinations exist for usernames and passwords. This is related to #334.

Centralize global resource creation and deletion

Currently it's unclear what rights are necessary for creating OpenShift resources to run a TS with the XTF framework.

For example, resources which are created as part of a certain action (i.e. an image-puller role binding when using a ManagedBuild) should not be hidden so that they look like a side effect of that action; they should be handled in a single place which would provide a single source of the required rights and of what resources XTF is creating.

Support multiple instances of OpenshiftUtil

Currently OpenshiftUtil is implemented as a singleton. OpenshiftUtil is namespace aware; the namespace is specified in OpenShiftContext. When a user wants to change the namespace of OpenshiftUtil, they must set the appropriate OpenShiftContext. During this context setup the underlying OpenShift client is closed.

This is the major limitation for running parallel tests in separate projects (namespaces). In such a case there is a need to configure several projects and their resources at once, which is impossible with the current singleton approach.

A possibility to address this limitation would be to allow using more instances of the OpenshiftUtil class at once. It can be achieved, for example, by allowing creation of a specific OpenshiftUtil instance for any project.

Improve ImageRegistry class and associated properties

Currently there are several problems with the ImageRegistry class, with executing test suites against images, and with the consistency of image naming in properties files.

Problems:

  • Images and streams are marked inconsistently, e.g. xtf.image, org.image; some differentiate between versions, some not.
  • The class methods themselves return only the image URL; if the user wants to get associated data, he needs to convert the image.

Suggestions:

  • Properties consistency:
    • All images would start with xtf.image
    • xtf.image.'id' property would be used to set main image
    • xtf.image.'id'.'version' would be used to track sub images and streams related to id
    • xtf.image.'id'.'version'.properties.'property' would be used to track various properties related to the stream
    • xtf.images.'id' would be used to track all relative images for integration testing in case that image with id is not main tested image
    • version should match '.version.' regex from image
  • API suggestions:
    • imageId() would return an instance of the Image class
    • imageId() would initialize the image either with the main image (xtf.image.id), with the corresponding version found among the sub-images for properties inheritance, or with the first version found. A third option would be to specify the xtf.image.'id'.version property to choose which version should be used.

Advantages:

  • Images would be retrievable through common methods with id and version
  • The suite should be executable without knowledge of the image version, yet executable with a version specified on request (specific profiles, property setup)
  • No need for commenting and un-commenting image properties
  • The ImageStream name with tags would be acquirable from the Image class (this simplifies classes using ImageStream annotations)
  • VersionRegistry class could be probably removed.

Example:
xtf.image.tomcat=imageUrl
xtf.image.tomcat.tomcat7=imageUrl
xtf.image.tomcat.tomcat7.properties.version=tomcat7
xtf.image.tomcat.tomcat7.properties.stream_name=tomcat7-stream
xtf.image.tomcat.tomcat7.properties.stream_tags=1.0,1.1

This is a halfway solution towards a knowledge_base file.

@maschmid suggestions?

Release to maven central

Make XTF releasable to Maven Central so there is no need for extra repository configuration in the project.

Use @API annotation for marking supported and unsupported features

In order to be able to distinguish between supported and unsupported features, we should use the @API annotation on test classes. This way we would be able to say which features the user can expect to be supported with backward compatibility, which are experimental, and which are internal and not meant to be used by parties other than the project itself.

See http://junit.org/junit5/docs/current/user-guide/#api-evolution for example.

Creating binding system:image-puller to role ClusterRole can fail with parallel runs in BuildManager

In case multiple runs of the test suite are started (in parallel), there is a race in:
https://github.com/xtf-cz/xtf/blob/master/core/src/main/java/cz/xtf/core/bm/BuildManager.java#L24

which results in:

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://api.eapqe-005-msb0.dynamic.xpaas:6443/apis/rbac.authorization.k8s.io/v1/namespaces/73-master/rolebindings. Message: rolebindings.rbac.authorization.k8s.io "system:image-puller" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=rbac.authorization.k8s.io, kind=rolebindings, name=system:image-puller, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=rolebindings.rbac.authorization.k8s.io "system:image-puller" already exists, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:568)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:507)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:471)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:430)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:251)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:802)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:322)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:318)
	at cz.xtf.core.openshift.OpenShift.createRoleBinding(OpenShift.java:821)
	at cz.xtf.core.openshift.OpenShift.getOrCreateRoleBinding(OpenShift.java:923)
	at cz.xtf.core.openshift.OpenShift.addRoleToGroup(OpenShift.java:900)
	at cz.xtf.core.bm.BuildManager.<init>(BuildManager.java:24)
	at cz.xtf.core.bm.BuildManagers.get(BuildManagers.java:23)
	at cz.xtf.junit5.listeners.ManagedBuildPrebuilder.testPlanExecutionStarted(ManagedBuildPrebuilder.java:70)
	at org.junit.platform.launcher.core.TestExecutionListenerRegistry$CompositeTestExecutionListener.lambda$testPlanExecutionStarted$6(TestExecutionListenerRegistry.java:97)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.launcher.core.TestExecutionListenerRegistry.notifyTestExecutionListeners(TestExecutionListenerRegistry.java:59)
	at org.junit.platform.launcher.core.TestExecutionListenerRegistry.access$100(TestExecutionListenerRegistry.java:28)
	at org.junit.platform.launcher.core.TestExecutionListenerRegistry$CompositeTestExecutionListener.testPlanExecutionStarted(TestExecutionListenerRegistry.java:97)
	at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:183)
	at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

Note that this happens if the same xtf.bm.namespace=xtf-builds namespace is used across the runs and when xtf.bm.namespace is created for the first time.

A solution can be to ignore the AlreadyExists exception.
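
A hedged sketch of that suggestion (illustrative only; createRoleBinding appears in the stack trace above):

try {
    openShift.createRoleBinding(roleBinding);
} catch (KubernetesClientException e) {
    if (e.getCode() != 409) {   // 409 Conflict == AlreadyExists
        throw e;
    }
}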

Race condition between multiple build creations

When multiple test suites contain the same @UsesFuseBuild (e.g. JDBC_KARAF), this error can happen:

com.redhat.xpaas.OthersFuseTestSuite  Time elapsed: 21.747 sec  <<< ERROR!
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://api.foo39.dos.fuse-qe.eng.rdu2.redhat.com:8443/apis/image.openshift.io/v1/namespaces/xpaasqe-builds/imagestreams. Message: imagestreams.image.openshift.io "camel-jdbc-fuse-karaf" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=image.openshift.io, kind=imagestreams, name=camel-jdbc-fuse-karaf, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=imagestreams.image.openshift.io "camel-jdbc-fuse-karaf" already exists, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:409)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:226)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:773)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:356)
	at cz.xtf.openshift.OpenshiftUtil.lambda$createImageStream$9(OpenshiftUtil.java:294)
	at cz.xtf.openshift.OpenshiftUtil.withDefaultUser(OpenshiftUtil.java:177)
	at cz.xtf.openshift.OpenshiftUtil.createImageStream(OpenshiftUtil.java:293)
	at cz.xtf.build.BuildProcess.deployResources(BuildProcess.java:70)
	at cz.xtf.build.BuildProcess.deployBuildFromGit(BuildProcess.java:63)
	at cz.xtf.build.BuildProcess.deployBuildFromGit(BuildProcess.java:56)
	at cz.xtf.build.PathGitBuildProcess.deployBuild(PathGitBuildProcess.java:34)
	at cz.xtf.build.BuildManagerV2.deployBuild(BuildManagerV2.java:66)
	at cz.xtf.build.BuildManagerV2.lambda$deployBuilds$0(BuildManagerV2.java:86)
	at java.lang.Iterable.forEach(Iterable.java:75)
	at cz.xtf.build.BuildManagerV2.deployBuilds(BuildManagerV2.java:86)
	at cz.xtf.junit.XTFTestSuite.deployBuilds(XTFTestSuite.java:227)
	at cz.xtf.junit.XTFTestSuite.beforeSuite(XTFTestSuite.java:145)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

More intelligent waiters

Add the ability for waiters to wait for more complex events than pod up and pod down, plus implementations for the more common use cases.

Add the ability for waiters to fail fast once a certain erroneous state has been detected (idea: lambdas for detection of error states, so it's configurable).

Relax CRD clean in case CRD is not installed

https://github.com/xtf-cz/xtf/blob/master/core/src/main/java/cz/xtf/core/openshift/OpenShift.java#L979

	public Waiter clean() {
	for (CustomResourceDefinitionContextProvider crdContextProvider : OpenShift.getCRDContextProviders()) {
			customResource(crdContextProvider.getContext()).delete(getNamespace());
	}

We install the operator CRD during the test execution; however, the cleanup blows up in case the CRD is not installed on the cluster yet (e.g. the first run on a clean cluster). We'd need to relax this part a little bit so it doesn't fail in case the CRD is not installed yet.

0.14-SNAPSHOT is generating WaiterException: CleaningProject

We are missing a way to specify the deletion policy in the kubernetes client [1].
(And probably with 0.13 the default becomes the orphan policy??)

Anyway, currently we are experiencing *-deploy pods not being deleted because they have finalizer: orphan and an ownerReference pointing to the parent object (I think it was ReplicaSet) at the same time.

So either

  1. OpenShift.clean() does not clean parent objects properly. Should we return to cascading(true) ?
  2. Or there is some bug in kubernetes GC and ownerReference is not cleared properly from child objects

Shouldn't we apply the cleanFinalizers workaround [2] again to cover this scenario?

[1] fabric8io/kubernetes-client#1614
[2] caa00a4

Discuss responsibility for PR reviews

The purpose of this issue is to define responsibility for PR reviews.

The suggestion is to specify a set of people (across teams) who would rotate in PR reviews. The outcome of this issue should be a schedule of who will do PR reviews and when (for example, a day of the week).

Required value: resource rules must supply at least one api group

After upgrade to 0.13-SNAPSHOT

From code

appBuilder.role(PODS_LISTING)
	.resources("pods", "pods/log").verbs("get", "list");

appBuilder.buildApplication(openshift).deploy();

I was getting

[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 109.959 s <<< FAILURE! - in com.redhat.xpaas.eap.xa.PsqlXARecoveryTest
[ERROR] testScaleDownToZeroWithSplit  Time elapsed: 106.363 s  <<< ERROR!
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://master.all-in-one-033.dynamic.xpaas:8443/apis/rbac.authorization.k8s.io/v1/namespaces/mchoma/roles. Message: Role.rbac.authorization.k8s.io "pods-listing" is invalid: rules[0].apiGroups: Required value: resource rules must supply at least one api group. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=rules[0].apiGroups, message=Required value: resource rules must supply at least one api group, reason=FieldValueRequired, additionalProperties={})], group=rbac.authorization.k8s.io, kind=Role, name=pods-listing, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Role.rbac.authorization.k8s.io "pods-listing" is invalid: rules[0].apiGroups: Required value: resource rules must supply at least one api group, metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:503)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:442)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:406)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:365)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:234)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:735)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:325)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:321)
	at cz.xtf.builder.OpenShiftApplication.lambda$createResources$4(OpenShiftApplication.java:110)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.LinkedList$LLSpliterator.forEachRemaining(LinkedList.java:1235)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at cz.xtf.builder.OpenShiftApplication.createResources(OpenShiftApplication.java:110)
	at cz.xtf.builder.OpenShiftApplication.deploy(OpenShiftApplication.java:71)
	at com.redhat.xpaas.eap.xa.load.AbstractXARecoveryLoadTest.buildAndDeployApplication(AbstractXARecoveryLoadTest.java:179)
	at com.redhat.xpaas.eap.xa.load.AbstractSQLXARecoveryLoadTest.buildAndDeployApplication(AbstractSQLXARecoveryLoadTest.java:95)
	at com.redhat.xpaas.eap.xa.load.AbstractPostgreSQLXARecoveryLoadTest.deploy(AbstractPostgreSQLXARecoveryLoadTest.java:338)
	at com.redhat.xpaas.eap.xa.PsqlXARecoveryTest.testScaleDownToZeroWithSplit(PsqlXARecoveryTest.java:73)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:628)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:117)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:184)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:180)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:127)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:135)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:122)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:80)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:122)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:80)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:122)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:80)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
	at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
	at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

Openshift.clean() doesn't take KEEP_LABEL into account, regression 0.12 -> 0.13

public class CleanupTest {
            List<ConfigMap> configMapList = Stream.of(
                            new ConfigMapBuilder("keep-label-map").addLabel(OpenShift.KEEP_LABEL, null).build(),
                            new ConfigMapBuilder("null-label-map").build(),
                            new ConfigMapBuilder("arbitrary-label-map").build()
            ).collect(Collectors.toList());
     
            private void createConfigMaps() {
                    for (ConfigMap configMap : configMapList) {
                            OpenShifts.master().createConfigMap(configMap);
                    }
                    new SimpleWaiter(() -> OpenShifts.master().getConfigMaps().size() == 3).waitFor();
            }
     
            private void deleteConfigMaps() {
                    for (ConfigMap configMap : OpenShifts.master().getConfigMaps()) {
                            OpenShifts.master().deleteConfigMap(configMap);
                    }
                    new SimpleWaiter(() -> OpenShifts.master().getConfigMaps().size() == 0).waitFor();
            }
     
            private void deleteConfigMaps(List<ConfigMap> removables) {
                    int expected = OpenShifts.master().getConfigMaps().size() - removables.size();
                    for (ConfigMap configMap : removables) {
                            OpenShifts.master().deleteConfigMap(configMap);
                    }
                    new SimpleWaiter(() -> OpenShifts.master().getConfigMaps().size() == expected).waitFor();
            }
     
            @BeforeEach
            public void setupEnv() {
                    deleteConfigMaps();
                    createConfigMaps();
            }
     
            @Test
            public void openshiftCleanTest() {
                    OpenShifts.master().clean();
                    Assertions.assertEquals(1, OpenShifts.master().getConfigMaps().size());
            }
     
            @Test
            public void customCleanTest() {
                    List<ConfigMap> removables = OpenShifts.master().getConfigMaps().stream()
                                    .filter(withoutKeepLabel())
                                    .collect(Collectors.toList());
                    deleteConfigMaps(removables);
                    Assertions.assertEquals(1, OpenShifts.master().getConfigMaps().size());
            }
     
            private Predicate<HasMetadata> withoutKeepLabel() {
                    return (hasMetadata) -> {
                            if (hasMetadata.getMetadata().getLabels() != null) {
                                    return !(hasMetadata.getMetadata().getLabels().containsKey(OpenShift.KEEP_LABEL));
                            } else {
                                    return true;
                            }
                    };
            }
    }

0.12

mvn clean test -Dtest=CleanupTest -Dxtf.version=0.12

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

0.13

mvn clean test -Dtest=CleanupTest -Dxtf.version=0.13

[ERROR] Failures: 
[ERROR]   CleanupTest.openshiftCleanTest:59 expected: <1> but was: <0>    
    org.opentest4j.AssertionFailedError:
    Expected :1
    Actual   :0
            at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
            at org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62)
            at org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
            at org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
            at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:510)
            at com.redhat.xpaas.eap.operator.junit.listener.CleanupTest.openshiftCleanTest(CleanupTest.java:59)

Duplicate ResourceParsers

./utilities/src/main/java/cz/xtf/openshift/ResourceParsers.java
./core/src/main/java/cz/xtf/core/openshift/helpers/ResourceParsers.java

Modularize xtf

At this point, XTF consists of one project only. In order to simplify the number of dependencies and improve project maintainability, it should be divided into several specialized modules.

Suggested modules:

  • Utilities - common group of base utilities targeted to work with OpenShift
  • JUnit 4 - annotations, suites, rules specialized for running tests against OpenShift environment with JUnit 4
  • JUnit 5 - annotations, extensions specialized for running tests against OpenShift environment with JUnit 5
  • Application Model - structures and objects for creating and deploying concrete applications on Openshift

@dsimansk From the release point of view, do you think a multi-module project would be better, or several separate projects under xtf-cz?

@maschmid Do you have any other suggestions?

AmqStandaloneBuilder broken

This commit c19f191 breaks the AMQ deployment.
Now it first prepares the deploymentConfig with preConfigurePod = true and then with preConfigurePod = false.
But before, it was in the opposite sequence, so it never added onImageChange (which is now problematic somehow), because with the second preConfigurePod=true call the config was taken from the cache and preConfigurePod=true had no effect. Now it fails with:

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://api.fis31.dos.fuse-qe.eng.rdu2.redhat.com:8443/apis/apps.openshift.io/v1/namespaces/llowinge/deploymentconfigs. Message: DeploymentConfig "amq" is invalid: spec.triggers[0].imageChangeParams.from.name: Invalid value: "registry.access.redhat.com/jboss-amq-6/amq63-openshift:1.3:latest": may not contain '/'. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.triggers[0].imageChangeParams.from.name, message=Invalid value: "registry.access.redhat.com/jboss-amq-6/amq63-openshift:1.3:latest": may not contain '/', reason=FieldValueInvalid, additionalProperties={})], group=null, kind=DeploymentConfig, name=amq, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=DeploymentConfig "amq" is invalid: spec.triggers[0].imageChangeParams.from.name: Invalid value: "registry.access.redhat.com/jboss-amq-6/amq63-openshift:1.3:latest": may not contain '/', metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).

NoClassDefFoundError - TrustStrategy

I'm getting this with 0.9-SNAPSHOT:

java.lang.NoClassDefFoundError: org/apache/http/ssl/TrustStrategy
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at cz.xtf.openshift.OpenShiftBinaryClient.getBinary(OpenShiftBinaryClient.java:208)
	at cz.xtf.openshift.OpenShiftBinaryClient.<init>(OpenShiftBinaryClient.java:34)
	at cz.xtf.openshift.OpenShiftBinaryClient.getInstance(OpenShiftBinaryClient.java:40)
