
keycloak-clustered

Keycloak-Clustered extends the official quay.io/keycloak/keycloak Docker image by adding the JDBC_PING discovery protocol.

Proof-of-Concepts & Articles

On ivangfr.github.io, I have compiled my Proof-of-Concepts (PoCs) and articles. You can easily search for the technology you are interested in by using the filter. Who knows, perhaps I have already implemented a PoC or written an article about what you are looking for.

Additional Readings

Supported tags and respective Dockerfile links

Author

Ivan Franchin (LinkedIn) (Github) (Medium) (Twitter)

Environment Variables

Please refer to the official Keycloak documentation at https://www.keycloak.org/server/all-config
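
Each KC_* environment variable used in this README maps to a Keycloak server option (for example, KC_DB_URL_HOST is the environment form of --db-url-host). As a quick sanity check, a sketch like the following prints the configuration the server resolves from the environment:

# Print the resolved configuration; KC_* env vars map to the corresponding
# server options, e.g. KC_DB <-> --db
docker run --rm -e KC_DB=mysql ivanfranchin/keycloak-clustered:latest show-config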

How to build a development Docker image locally

Navigate into one of the version folders and run the following command

docker build -t ivanfranchin/keycloak-clustered:latest .
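
To confirm the build succeeded, you can list the image afterwards:

docker images ivanfranchin/keycloak-clustered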

How to check if Keycloak instances are sharing user sessions

  1. Open two different browsers, for instance Chrome and Safari, or Chrome and a Chrome Incognito window.

  2. In one, access http://localhost:8080 and, in the other, http://localhost:8081

  3. Login with the following credentials

    username: admin
    password: admin
    
  4. Once logged in

  • Click Sessions in the menu on the left;
  • You should see that admin has two sessions.
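
If you prefer the command line, here is a minimal sketch of the same check using curl and jq (both assumed to be installed, and assuming Keycloak 17+ URL paths without the /auth prefix): it obtains an admin token from one instance and lists the admin user's sessions through the other instance.

# Get an admin access token from instance 1 (port 8080)
TOKEN=$(curl -s http://localhost:8080/realms/master/protocol/openid-connect/token \
  -d 'client_id=admin-cli' -d 'grant_type=password' \
  -d 'username=admin' -d 'password=admin' | jq -r '.access_token')

# Find the admin user's id through instance 2 (port 8081)
USER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" \
  'http://localhost:8081/admin/realms/master/users?username=admin' | jq -r '.[0].id')

# List the admin user's sessions through instance 2; both browser logins should appear
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8081/admin/realms/master/users/$USER_ID/sessions" | jq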

Running a Keycloak Cluster using JDBC_PING

Prerequisites

Docker

Using MySQL

Startup

Open a terminal and create a Docker network

docker network create keycloak-net

Run MySQL Docker container

docker run --rm --name mysql -p 3306:3306 \
  -e MYSQL_DATABASE=keycloak \
  -e MYSQL_USER=keycloak \
  -e MYSQL_PASSWORD=password \
  -e MYSQL_ROOT_PASSWORD=root_password \
  --network keycloak-net \
  mysql:8.4.0
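
MySQL takes a few seconds to initialize. Before starting the Keycloak containers, you can wait for the last "ready for connections" entry to appear in its logs, for example:

docker logs mysql 2>&1 | grep 'ready for connections'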

Open another terminal and run keycloak-clustered-1 Docker container

docker run --rm --name keycloak-clustered-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mysql \
  -e KC_DB_URL_HOST=mysql \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-1 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Finally, open another terminal and run keycloak-clustered-2 Docker container

docker run --rm --name keycloak-clustered-2 -p 8081:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mysql \
  -e KC_DB_URL_HOST=mysql \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-2 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev
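
Once both containers are up, their logs should show the two nodes forming a cluster. A quick way to check (the exact message codes may vary between Keycloak versions) is to grep either container's logs for the Infinispan cluster view:

docker logs keycloak-clustered-2 2>&1 | grep -i 'cluster view'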

Testing

To test it, have a look at the section How to check if Keycloak instances are sharing user sessions

Check database

Access the MySQL monitor terminal inside the mysql Docker container

docker exec -it -e MYSQL_PWD=password mysql mysql -ukeycloak --database keycloak

List tables

mysql> show tables;

Select entries in JGROUPSPING table

mysql> SELECT * FROM JGROUPSPING;

To exit the MySQL monitor terminal, type exit
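
Alternatively, the same check can be done non-interactively from the host; the column names below come from the JGROUPSPING table definition used by the image:

docker exec -e MYSQL_PWD=password mysql \
  mysql -ukeycloak --database keycloak -e 'SELECT own_addr, bind_addr, updated FROM JGROUPSPING;'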

Teardown

To stop the keycloak-clustered-1 and keycloak-clustered-2 Docker containers, press Ctrl+C in their terminals.

To stop the mysql Docker container, press Ctrl+\ in its terminal.

To remove the Docker network, run in a terminal

docker network rm keycloak-net

Using MariaDB

Startup

Open a terminal and create a Docker network

docker network create keycloak-net

Run MariaDB Docker container

docker run --rm --name mariadb -p 3306:3306 \
  -e MARIADB_DATABASE=keycloak \
  -e MARIADB_USER=keycloak \
  -e MARIADB_PASSWORD=password \
  -e MARIADB_ROOT_PASSWORD=root_password \
  --network keycloak-net \
  mariadb:10.11.6

Open another terminal and run keycloak-clustered-1 Docker container

docker run --rm --name keycloak-clustered-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mariadb \
  -e KC_DB_URL_HOST=mariadb \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-1 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Finally, open another terminal and run keycloak-clustered-2 Docker container

docker run --rm --name keycloak-clustered-2 -p 8081:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mariadb \
  -e KC_DB_URL_HOST=mariadb \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-2 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Testing

To test it, have a look at the section How to check if Keycloak instances are sharing user sessions

Check database

Access the MariaDB monitor terminal inside the mariadb Docker container

docker exec -it mariadb mariadb -ukeycloak -ppassword --database keycloak

List tables

MariaDB [keycloak]> show tables;

Select entries in JGROUPSPING table

MariaDB [keycloak]> SELECT * FROM JGROUPSPING;

To exit the MariaDB monitor terminal, type exit

Teardown

To stop the keycloak-clustered-1 and keycloak-clustered-2 Docker containers, press Ctrl+C in their terminals.

To stop the mariadb Docker container, press Ctrl+\ in its terminal.

To remove the Docker network, run in a terminal

docker network rm keycloak-net

Using Postgres

Startup

Open a terminal and create a Docker network

docker network create keycloak-net

Run Postgres Docker container

docker run --rm --name postgres -p 5432:5432 \
  -e POSTGRES_DB=keycloak \
  -e POSTGRES_USER=keycloak \
  -e POSTGRES_PASSWORD=password \
  --network keycloak-net \
  postgres:16.1

Open another terminal and run keycloak-clustered-1 Docker container

docker run --rm --name keycloak-clustered-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=postgres \
  -e KC_DB_URL_HOST=postgres \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_SCHEMA=myschema \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-1 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Finally, open another terminal and run keycloak-clustered-2 Docker container

docker run --rm --name keycloak-clustered-2 -p 8081:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=postgres \
  -e KC_DB_URL_HOST=postgres \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_SCHEMA=myschema \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-2 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Testing

To test it, have a look at the section How to check if Keycloak instances are sharing user sessions

Check database

Access the psql terminal inside the postgres Docker container

docker exec -it postgres psql -U keycloak

List tables in myschema schema

keycloak=# \dt myschema.*

Select entries in JGROUPSPING table

keycloak=# SELECT * FROM myschema.JGROUPSPING;

To exit the psql terminal, type \q
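
As with the other databases, a one-shot query from the host also works; note that Postgres folds unquoted identifiers to lowercase, so JGROUPSPING and jgroupsping refer to the same table:

docker exec postgres psql -U keycloak -c 'SELECT own_addr, bind_addr, updated FROM myschema.jgroupsping;'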

Teardown

To stop the postgres, keycloak-clustered-1 and keycloak-clustered-2 Docker containers, press Ctrl+C in their terminals.

To remove the Docker network, run in a terminal

docker network rm keycloak-net

Using Microsoft SQL Server

Warning: It is not working! See the Issues section below.

Startup

Open a terminal and create a Docker network

docker network create keycloak-net

Run Microsoft SQL Server Docker container

docker run --rm --name mssql -p 1433:1433 \
  -e ACCEPT_EULA=Y \
  -e MSSQL_SA_PASSWORD=my_Password \
  --network keycloak-net \
  mcr.microsoft.com/mssql/server:2022-CU11-ubuntu-22.04

Open another terminal and run the following command to create the keycloak database

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P my_Password -Q 'CREATE DATABASE keycloak'
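
Before starting the Keycloak containers, you can verify the database was created, for instance:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P my_Password \
  -Q "SELECT name FROM sys.databases WHERE name = 'keycloak'"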

In a terminal, run keycloak-clustered-1 Docker container

docker run --rm --name keycloak-clustered-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mssql \
  -e KC_DB_URL_HOST=mssql \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_URL_PROPERTIES=";trustServerCertificate=false;encrypt=false" \
  -e KC_DB_USERNAME=SA \
  -e KC_DB_PASSWORD=my_Password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-1 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Finally, open another terminal and run keycloak-clustered-2 Docker container

docker run --rm --name keycloak-clustered-2 -p 8081:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mssql \
  -e KC_DB_URL_HOST=mssql \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_URL_PROPERTIES=";trustServerCertificate=false;encrypt=false" \
  -e KC_DB_USERNAME=SA \
  -e KC_DB_PASSWORD=my_Password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-2 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Testing

To test it, have a look at the section How to check if Keycloak instances are sharing user sessions

Check database

Access the sqlcmd terminal inside the mssql Docker container

docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P my_Password

Select entries in JGROUPSPING table

1> select * from keycloak.myschema.JGROUPSPING
2> go

To exit the sqlcmd terminal, type exit or press Ctrl+C

Teardown

To stop the mssql, keycloak-clustered-1 and keycloak-clustered-2 Docker containers, press Ctrl+C in their terminals.

To remove the Docker network, run in a terminal

docker network rm keycloak-net

Running a Keycloak Cluster using JDBC_PING in Virtual Machines

Prerequisites

VirtualBox and Vagrant

Startup

Open a terminal and make sure you are in the keycloak-clustered root folder

You can edit the Vagrantfile to set the database and/or the discovery protocol to be used

Start the virtual machines by running the command below

vagrant up

Mac Users

If you get an error like the one below

The IP address configured for the host-only network is not within the
allowed ranges. Please update the address used to be within the allowed
ranges and run the command again.

  Address: 10.0.0.1
  Ranges: 123.456.78.0/21

Valid ranges can be modified in the /etc/vbox/networks.conf file. For
more information including valid format see:

  https://www.virtualbox.org/manual/ch06.html#network_hostonly

Create a new file at /etc/vbox/networks.conf on your Mac with the content

* 10.0.0.0/8 123.456.78.0/21
* 2001::/64

Wait until the virtual machines start; it will take some time.

Once the execution of the command vagrant up finishes, we can check the state of all active Vagrant environments

vagrant status

Check keycloak-clustered docker logs in keycloak1 virtual machine

vagrant ssh keycloak1
vagrant@vagrant:~$ docker logs keycloak-clustered -f

Note: To get out of the logging view press Ctrl+C and to exit the virtual machine type exit

Check keycloak-clustered docker logs in keycloak2 virtual machine

vagrant ssh keycloak2
vagrant@vagrant:~$ docker logs keycloak-clustered -f

Note: To get out of the logging view press Ctrl+C and to exit the virtual machine type exit

Check databases if you are using JDBC_PING

vagrant ssh databases

Note: To exit the virtual machine type exit

  • MySQL

    vagrant@vagrant:~$ docker exec -it -e MYSQL_PWD=password mysql mysql -ukeycloak --database keycloak
    mysql> show tables;
    mysql> SELECT * FROM JGROUPSPING;
    

    Note: To exit type exit

  • MariaDB

    vagrant@vagrant:~$ docker exec -it mariadb mysql -ukeycloak -ppassword --database keycloak
    MariaDB [keycloak]> show tables;
    MariaDB [keycloak]> SELECT * FROM JGROUPSPING;
    

    Note: To exit type exit

  • Postgres

    vagrant@vagrant:~$ docker exec -it postgres psql -U keycloak
    keycloak=# \dt *.*
      
    -- `public` schema
    keycloak=# SELECT * FROM JGROUPSPING;
      
    -- in case the schema `myschema` was set
    keycloak=# SELECT * FROM myschema.JGROUPSPING;
    

    Note: To exit type \q

Testing

In order to test it, have a look at How to check if keycloak-clustered instances are sharing user sessions

Using another database

Edit the Vagrantfile, setting the DB_VENDOR variable to the database to be used

Reload Keycloak virtual machines by running

vagrant reload keycloak1 keycloak2 --provision

Teardown

Suspend the machines

Suspending the virtual machines will stop them and save their current running state. To do so, run

vagrant suspend

To bring the virtual machines back up run

vagrant up

Halt the machines

Halting the virtual machines will gracefully shut down the guest operating systems and power down the guest machines

vagrant halt

It preserves the contents of disk and allows you to start the machines again by running

vagrant up

Destroy the machines

Destroying the virtual machines will remove all traces of the guest machines from your system. It will stop the guest machines, power them down, and reclaim their disk space and RAM.

vagrant destroy -f

For a complete cleanup, you can remove the Vagrant box used in this section

vagrant box remove hashicorp/bionic64

Issues

Microsoft SQL Server

WARN  [com.arjuna.ats.jta] (main) ARJUNA016061: TransactionImple.enlistResource - XAResource.start returned: XAException.XAER_RMERR for < formatId=131077, gtrid_length=35, bqual_length=36, tx_uid=0:ffffac160003:a743:65a52f11:0, node_name=quarkus, branch_uid=0:ffffac160003:a743:65a52f11:3f, subordinatenodename=null, eis_name=0 >: javax.transaction.xa.XAException: com.microsoft.sqlserver.jdbc.SQLServerException: Failed to create the XA control connection. Error: "The connection is closed."
	at com.microsoft.sqlserver.jdbc.SQLServerXAResource.DTC_XA_Interface(SQLServerXAResource.java:757)
	at com.microsoft.sqlserver.jdbc.SQLServerXAResource.start(SQLServerXAResource.java:791)
	at io.agroal.narayana.BaseXAResource.start(BaseXAResource.java:150)
	at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:661)
	at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:422)
	at io.agroal.narayana.NarayanaTransactionIntegration.associate(NarayanaTransactionIntegration.java:93)
	at io.agroal.pool.ConnectionPool.getConnection(ConnectionPool.java:252)
	at io.agroal.pool.DataSource.getConnection(DataSource.java:86)
	at io.quarkus.hibernate.orm.runtime.customized.QuarkusConnectionProvider.getConnection(QuarkusConnectionProvider.java:23)
	at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:38)
	at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:113)
	at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:143)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:51)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:150)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:177)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:152)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.lambda$list$0(JdbcSelectExecutorStandardImpl.java:102)
	at org.hibernate.sql.results.jdbc.internal.DeferredResultSetAccess.executeQuery(DeferredResultSetAccess.java:226)
	at org.hibernate.sql.results.jdbc.internal.DeferredResultSetAccess.getResultSet(DeferredResultSetAccess.java:163)
	at org.hibernate.sql.results.jdbc.internal.JdbcValuesResultSetImpl.advanceNext(JdbcValuesResultSetImpl.java:254)
	at org.hibernate.sql.results.jdbc.internal.JdbcValuesResultSetImpl.processNext(JdbcValuesResultSetImpl.java:134)
	at org.hibernate.sql.results.jdbc.internal.AbstractJdbcValues.next(AbstractJdbcValues.java:19)
	at org.hibernate.sql.results.internal.RowProcessingStateStandardImpl.next(RowProcessingStateStandardImpl.java:66)
	at org.hibernate.sql.results.spi.ListResultsConsumer.consume(ListResultsConsumer.java:198)
	at org.hibernate.sql.results.spi.ListResultsConsumer.consume(ListResultsConsumer.java:33)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.doExecuteQuery(JdbcSelectExecutorStandardImpl.java:361)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.executeQuery(JdbcSelectExecutorStandardImpl.java:168)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.list(JdbcSelectExecutorStandardImpl.java:93)
	at org.hibernate.sql.exec.spi.JdbcSelectExecutor.list(JdbcSelectExecutor.java:31)
	at org.hibernate.query.sqm.internal.ConcreteSqmSelectQueryPlan.lambda$new$0(ConcreteSqmSelectQueryPlan.java:110)
	at org.hibernate.query.sqm.internal.ConcreteSqmSelectQueryPlan.withCacheableSqmInterpretation(ConcreteSqmSelectQueryPlan.java:303)
	at org.hibernate.query.sqm.internal.ConcreteSqmSelectQueryPlan.performList(ConcreteSqmSelectQueryPlan.java:244)
	at org.hibernate.query.sqm.internal.QuerySqmImpl.doList(QuerySqmImpl.java:518)
	at org.hibernate.query.spi.AbstractSelectionQuery.list(AbstractSelectionQuery.java:367)
	at org.hibernate.query.Query.getResultList(Query.java:119)
	at org.keycloak.models.jpa.MigrationModelAdapter.init(MigrationModelAdapter.java:59)
	at org.keycloak.models.jpa.MigrationModelAdapter.<init>(MigrationModelAdapter.java:42)
	at org.keycloak.models.jpa.JpaRealmProvider.getMigrationModel(JpaRealmProvider.java:99)
	at org.keycloak.storage.datastore.LegacyMigrationManager.migrate(LegacyMigrationManager.java:128)
	at org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:33)
	at org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory.migrateModel(LegacyJpaConnectionProviderFactory.java:216)
	at org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory.initSchema(LegacyJpaConnectionProviderFactory.java:210)
	at org.keycloak.models.utils.KeycloakModelUtils.lambda$runJobInTransaction$1(KeycloakModelUtils.java:260)
	at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransactionWithResult(KeycloakModelUtils.java:382)
	at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:259)
	at org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory.postInit(LegacyJpaConnectionProviderFactory.java:135)
	at org.keycloak.quarkus.runtime.integration.QuarkusKeycloakSessionFactory.init(QuarkusKeycloakSessionFactory.java:105)
	at org.keycloak.quarkus.runtime.integration.jaxrs.QuarkusKeycloakApplication.createSessionFactory(QuarkusKeycloakApplication.java:56)
	at org.keycloak.services.resources.KeycloakApplication.startup(KeycloakApplication.java:130)
	at org.keycloak.quarkus.runtime.integration.jaxrs.QuarkusKeycloakApplication.onStartupEvent(QuarkusKeycloakApplication.java:46)
	at org.keycloak.quarkus.runtime.integration.jaxrs.QuarkusKeycloakApplication_Observer_onStartupEvent_67d48587b481b764f44181a34540ebd3d495c2c7.notify(Unknown Source)
	at io.quarkus.arc.impl.EventImpl$Notifier.notifyObservers(EventImpl.java:346)
	at io.quarkus.arc.impl.EventImpl$Notifier.notify(EventImpl.java:328)
	at io.quarkus.arc.impl.EventImpl.fire(EventImpl.java:82)
	at io.quarkus.arc.runtime.ArcRecorder.fireLifecycleEvent(ArcRecorder.java:155)
	at io.quarkus.arc.runtime.ArcRecorder.handleLifecycleEvents(ArcRecorder.java:106)
	at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy_0(Unknown Source)
	at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy(Unknown Source)
	at io.quarkus.runner.ApplicationImpl.doStart(Unknown Source)
	at io.quarkus.runtime.Application.start(Application.java:101)
	at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:111)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:71)
	at org.keycloak.quarkus.runtime.KeycloakMain.start(KeycloakMain.java:117)
	at org.keycloak.quarkus.runtime.cli.command.AbstractStartCommand.run(AbstractStartCommand.java:33)
	at picocli.CommandLine.executeUserObject(CommandLine.java:2026)
	at picocli.CommandLine.access$1500(CommandLine.java:148)
	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2461)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2453)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2415)
	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2273)
	at picocli.CommandLine$RunLast.execute(CommandLine.java:2417)
	at picocli.CommandLine.execute(CommandLine.java:2170)
	at org.keycloak.quarkus.runtime.cli.Picocli.parseAndRun(Picocli.java:119)
	at org.keycloak.quarkus.runtime.KeycloakMain.main(KeycloakMain.java:107)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.doRun(QuarkusEntryPoint.java:61)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.main(QuarkusEntryPoint.java:32)

2024-01-15 13:11:45,598 WARN  [com.arjuna.ats.jta] (main) ARJUNA016138: Failed to enlist XA resource io.agroal.narayana.BaseXAResource@70d256e: jakarta.transaction.SystemException: TransactionImple.enlistResource - XAResource.start ARJUNA016054: could not register transaction: < formatId=131077, gtrid_length=35, bqual_length=36, tx_uid=0:ffffac160003:a743:65a52f11:0, node_name=quarkus, branch_uid=0:ffffac160003:a743:65a52f11:3f, subordinatenodename=null, eis_name=0 >
	at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:714)
	at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:422)
	at io.agroal.narayana.NarayanaTransactionIntegration.associate(NarayanaTransactionIntegration.java:93)
	at io.agroal.pool.ConnectionPool.getConnection(ConnectionPool.java:252)
	at io.agroal.pool.DataSource.getConnection(DataSource.java:86)
	at io.quarkus.hibernate.orm.runtime.customized.QuarkusConnectionProvider.getConnection(QuarkusConnectionProvider.java:23)
	at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:38)
	at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:113)
	at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:143)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:51)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:150)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:177)
	at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:152)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.lambda$list$0(JdbcSelectExecutorStandardImpl.java:102)
	at org.hibernate.sql.results.jdbc.internal.DeferredResultSetAccess.executeQuery(DeferredResultSetAccess.java:226)
	at org.hibernate.sql.results.jdbc.internal.DeferredResultSetAccess.getResultSet(DeferredResultSetAccess.java:163)
	at org.hibernate.sql.results.jdbc.internal.JdbcValuesResultSetImpl.advanceNext(JdbcValuesResultSetImpl.java:254)
	at org.hibernate.sql.results.jdbc.internal.JdbcValuesResultSetImpl.processNext(JdbcValuesResultSetImpl.java:134)
	at org.hibernate.sql.results.jdbc.internal.AbstractJdbcValues.next(AbstractJdbcValues.java:19)
	at org.hibernate.sql.results.internal.RowProcessingStateStandardImpl.next(RowProcessingStateStandardImpl.java:66)
	at org.hibernate.sql.results.spi.ListResultsConsumer.consume(ListResultsConsumer.java:198)
	at org.hibernate.sql.results.spi.ListResultsConsumer.consume(ListResultsConsumer.java:33)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.doExecuteQuery(JdbcSelectExecutorStandardImpl.java:361)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.executeQuery(JdbcSelectExecutorStandardImpl.java:168)
	at org.hibernate.sql.exec.internal.JdbcSelectExecutorStandardImpl.list(JdbcSelectExecutorStandardImpl.java:93)
	at org.hibernate.sql.exec.spi.JdbcSelectExecutor.list(JdbcSelectExecutor.java:31)
	at org.hibernate.query.sqm.internal.ConcreteSqmSelectQueryPlan.lambda$new$0(ConcreteSqmSelectQueryPlan.java:110)
	at org.hibernate.query.sqm.internal.ConcreteSqmSelectQueryPlan.withCacheableSqmInterpretation(ConcreteSqmSelectQueryPlan.java:303)
	at org.hibernate.query.sqm.internal.ConcreteSqmSelectQueryPlan.performList(ConcreteSqmSelectQueryPlan.java:244)
	at org.hibernate.query.sqm.internal.QuerySqmImpl.doList(QuerySqmImpl.java:518)
	at org.hibernate.query.spi.AbstractSelectionQuery.list(AbstractSelectionQuery.java:367)
	at org.hibernate.query.Query.getResultList(Query.java:119)
	at org.keycloak.models.jpa.MigrationModelAdapter.init(MigrationModelAdapter.java:59)
	at org.keycloak.models.jpa.MigrationModelAdapter.<init>(MigrationModelAdapter.java:42)
	at org.keycloak.models.jpa.JpaRealmProvider.getMigrationModel(JpaRealmProvider.java:99)
	at org.keycloak.storage.datastore.LegacyMigrationManager.migrate(LegacyMigrationManager.java:128)
	at org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:33)
	at org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory.migrateModel(LegacyJpaConnectionProviderFactory.java:216)
	at org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory.initSchema(LegacyJpaConnectionProviderFactory.java:210)
	at org.keycloak.models.utils.KeycloakModelUtils.lambda$runJobInTransaction$1(KeycloakModelUtils.java:260)
	at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransactionWithResult(KeycloakModelUtils.java:382)
	at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:259)
	at org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory.postInit(LegacyJpaConnectionProviderFactory.java:135)
	at org.keycloak.quarkus.runtime.integration.QuarkusKeycloakSessionFactory.init(QuarkusKeycloakSessionFactory.java:105)
	at org.keycloak.quarkus.runtime.integration.jaxrs.QuarkusKeycloakApplication.createSessionFactory(QuarkusKeycloakApplication.java:56)
	at org.keycloak.services.resources.KeycloakApplication.startup(KeycloakApplication.java:130)
	at org.keycloak.quarkus.runtime.integration.jaxrs.QuarkusKeycloakApplication.onStartupEvent(QuarkusKeycloakApplication.java:46)
	at org.keycloak.quarkus.runtime.integration.jaxrs.QuarkusKeycloakApplication_Observer_onStartupEvent_67d48587b481b764f44181a34540ebd3d495c2c7.notify(Unknown Source)
	at io.quarkus.arc.impl.EventImpl$Notifier.notifyObservers(EventImpl.java:346)
	at io.quarkus.arc.impl.EventImpl$Notifier.notify(EventImpl.java:328)
	at io.quarkus.arc.impl.EventImpl.fire(EventImpl.java:82)
	at io.quarkus.arc.runtime.ArcRecorder.fireLifecycleEvent(ArcRecorder.java:155)
	at io.quarkus.arc.runtime.ArcRecorder.handleLifecycleEvents(ArcRecorder.java:106)
	at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy_0(Unknown Source)
	at io.quarkus.deployment.steps.LifecycleEventsBuildStep$startupEvent1144526294.deploy(Unknown Source)
	at io.quarkus.runner.ApplicationImpl.doStart(Unknown Source)
	at io.quarkus.runtime.Application.start(Application.java:101)
	at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:111)
	at io.quarkus.runtime.Quarkus.run(Quarkus.java:71)
	at org.keycloak.quarkus.runtime.KeycloakMain.start(KeycloakMain.java:117)
	at org.keycloak.quarkus.runtime.cli.command.AbstractStartCommand.run(AbstractStartCommand.java:33)
	at picocli.CommandLine.executeUserObject(CommandLine.java:2026)
	at picocli.CommandLine.access$1500(CommandLine.java:148)
	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2461)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2453)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2415)
	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2273)
	at picocli.CommandLine$RunLast.execute(CommandLine.java:2417)
	at picocli.CommandLine.execute(CommandLine.java:2170)
	at org.keycloak.quarkus.runtime.cli.Picocli.parseAndRun(Picocli.java:119)
	at org.keycloak.quarkus.runtime.KeycloakMain.main(KeycloakMain.java:107)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.doRun(QuarkusEntryPoint.java:61)
	at io.quarkus.bootstrap.runner.QuarkusEntryPoint.main(QuarkusEntryPoint.java:32)

Contributors

ahaerpfer, berendwouters, blarne, chrisbra


keycloak-clustered's Issues

Unable to delete PingData on container shutdown

I noticed that I have a number of entries in the jgroupsping table under Postgres, even after the container has been shut down and destroyed. Looking at the Docker logs for a shut-down container, I see this:

*** JBossAS process (791) received TERM signal ***
21:38:46,043 INFO  [org.jboss.as.server] (Thread-1) WFLYSRV0272: Suspending server
21:38:46,047 INFO  [org.jboss.as.ejb3] (Thread-1) WFLYEJB0493: Jakarta Enterprise Beans subsystem suspension complete
21:38:46,053 INFO  [org.jboss.as.server] (Thread-1) WFLYSRV0220: Server shutdown has been requested via an OS signal
21:38:46,095 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-3) WFLYJCA0010: Unbound data source [java:jboss/datasources/KeycloakDS]
21:38:46,102 INFO  [org.infinispan.manager.DefaultCacheManager] (ServerService Thread Pool -- 69) Stopping cache manager null on keycloak-dev-template-2021-09-2
21:38:46,106 INFO  [org.infinispan.CLUSTER] (ServerService Thread Pool -- 69) ISPN000080: Disconnecting JGroups channel ejb
21:38:46,110 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-4) WFLYMAIL0002: Unbound mail session [java:jboss/mail/Default]
21:38:46,127 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0008: Undertow HTTPS listener https suspending
21:38:46,118 INFO  [org.infinispan.manager.DefaultCacheManager] (ServerService Thread Pool -- 73) Stopping cache manager null on keycloak-dev-template-2021-09-2
21:38:46,131 INFO  [org.infinispan.CLUSTER] (ServerService Thread Pool -- 73) ISPN000080: Disconnecting JGroups channel ejb
21:38:46,130 INFO  [org.infinispan.manager.DefaultCacheManager] (ServerService Thread Pool -- 74) Stopping cache manager null on keycloak-dev-template-2021-09-2
21:38:46,137 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 69) WFLYUT0022: Unregistered web context: '/auth' from server 'default-server'
21:38:46,140 INFO  [org.infinispan.CLUSTER] (ServerService Thread Pool -- 74) ISPN000080: Disconnecting JGroups channel ejb
21:38:46,153 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0007: Undertow HTTPS listener https stopped, was bound to 10.128.0.44:8443
21:38:46,179 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:jboss/datasources/ExampleDS]
21:38:46,185 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0019: Host default-host stopping
21:38:46,186 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0008: Undertow AJP listener ajp suspending
21:38:46,189 INFO  [org.jboss.modcluster] (ServerService Thread Pool -- 78) MODCLUSTER000002: Initiating mod_cluster shutdown
21:38:46,187 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0007: Undertow AJP listener ajp stopped, was bound to 10.128.0.44:8009
21:38:46,207 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) WFLYJCA0019: Stopped Driver service with driver-name = h2
21:38:46,215 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 86) WFLYCLINF0003: Stopped authorization cache from keycloak container
21:38:46,218 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 69) WFLYCLINF0003: Stopped keys cache from keycloak container
21:38:46,234 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 83) WFLYCLINF0003: Stopped users cache from keycloak container
21:38:46,240 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 73) WFLYCLINF0003: Stopped work cache from keycloak container
21:38:46,244 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 74) WFLYCLINF0003: Stopped actionTokens cache from keycloak container
21:38:46,248 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 85) WFLYCLINF0003: Stopped loginFailures cache from keycloak container
21:38:46,251 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 87) WFLYCLINF0003: Stopped sessions cache from keycloak container
21:38:46,254 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 82) WFLYCLINF0003: Stopped clientSessions cache from keycloak container
21:38:46,258 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 80) WFLYCLINF0003: Stopped offlineSessions cache from keycloak container
21:38:46,260 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 84) WFLYCLINF0003: Stopped offlineClientSessions cache from keycloak container
21:38:46,263 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 76) WFLYCLINF0003: Stopped authenticationSessions cache from keycloak container
21:38:46,275 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-2) WFLYSRV0028: Stopped deployment keycloak-server.war (runtime-name: keycloak-server.war) in 214ms
21:38:46,278 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0008: Undertow HTTP listener default suspending
21:38:46,281 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) WFLYJCA0019: Stopped Driver service with driver-name = postgresql
21:38:46,284 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0007: Undertow HTTP listener default stopped, was bound to 10.128.0.44:8080
21:38:46,298 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0004: Undertow 2.2.5.Final stopping
21:38:46,305 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 81) WFLYCLINF0003: Stopped realms cache from keycloak container
21:38:46,307 INFO  [org.infinispan.manager.DefaultCacheManager] (ServerService Thread Pool -- 81) Stopping cache manager null on keycloak-dev-template-2021-09-2
21:38:46,315 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 83) WFLYCLINF0003: Stopped http-remoting-connector cache from ejb container
21:38:46,316 INFO  [org.infinispan.manager.DefaultCacheManager] (ServerService Thread Pool -- 75) Stopping cache manager null on keycloak-dev-template-2021-09-2
21:38:46,321 INFO  [org.infinispan.CLUSTER] (ServerService Thread Pool -- 75) ISPN000080: Disconnecting JGroups channel ejb
21:38:46,328 INFO  [org.infinispan.CLUSTER] (ServerService Thread Pool -- 81) ISPN000080: Disconnecting JGroups channel ejb
21:38:46,347 ERROR [org.jgroups.protocols.JDBC_PING] (ServerService Thread Pool -- 75) JGRP000115: Could not open connection to database: java.sql.SQLException: javax.resource.ResourceException: IJ000470: You are trying to use a connection factory that has been shut down: java:jboss/datasources/KeycloakDS
        at [email protected]//org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:159)
        at [email protected]//org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64)
        at [email protected]//org.jgroups.protocols.JDBC_PING.getConnection(JDBC_PING.java:302)
        at [email protected]//org.jgroups.protocols.JDBC_PING.delete(JDBC_PING.java:337)
        at [email protected]//org.jgroups.protocols.JDBC_PING.remove(JDBC_PING.java:175)
        at [email protected]//org.jgroups.protocols.FILE_PING.stop(FILE_PING.java:101)
        at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
        at [email protected]//org.jgroups.stack.ProtocolStack.stopStack(ProtocolStack.java:899)
        at [email protected]//org.jgroups.JChannel.stopStack(JChannel.java:1085)
        at [email protected]//org.jgroups.JChannel.disconnect(JChannel.java:444)
        at [email protected]//org.jboss.as.clustering.jgroups.subsystem.ChannelServiceConfigurator.accept(ChannelServiceConfigurator.java:122)
        at [email protected]//org.jboss.as.clustering.jgroups.subsystem.ChannelServiceConfigurator.accept(ChannelServiceConfigurator.java:58)
        at [email protected]//org.wildfly.clustering.service.FunctionalService.stop(FunctionalService.java:73)
        at [email protected]//org.wildfly.clustering.service.AsyncServiceConfigurator$AsyncService.lambda$stop$1(AsyncServiceConfigurator.java:142)
        at [email protected]//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
        at [email protected]//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
        at [email protected]//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
        at [email protected]//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
        at java.base/java.lang.Thread.run(Thread.java:829)
        at [email protected]//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: javax.resource.ResourceException: IJ000470: You are trying to use a connection factory that has been shut down: java:jboss/datasources/KeycloakDS
        at [email protected]//org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:777)
        at [email protected]//org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:151)
        ... 19 more

21:38:46,351 ERROR [org.jgroups.protocols.JDBC_PING] (ServerService Thread Pool -- 75) JGRP000215: Failed to delete PingData in database
21:38:46,381 INFO  [org.jboss.as] (MSC service thread 1-1) WFLYSRV0050: Keycloak 15.0.2 (WildFly Core 15.0.1.Final) stopped in 320ms
*** JBossAS process (791) received TERM signal ***

Based on this, it looks like [java:jboss/datasources/KeycloakDS] is being shut down before the final org.jgroups.protocols.JDBC_PING deletion takes place.

STANDALONE CLUSTER UNABLE TO SYNCHRONIZE IMMEDIATELY

Hi, I have set up two Keycloak machines, keycloak01 and keycloak02, both connected to a separate standalone MariaDB server. In the testing environment, I created a realm named TEST01 on the primary Keycloak (keycloak01), but the secondary server (keycloak02) did not show the TEST01 realm, even though both Keycloak instances were running. When I then created another realm, TEST02, on keycloak02, the TEST01 realm became visible right after. This is not a problem if keycloak01 goes down and comes up again; it will get the latest realms/data from the database. It seems it needs to reconnect to the database server to sync the data. My question is: is there a way to keep both Keycloak servers in sync all the time?

JDBC Driver required for JDBC_PING protocol could not be loaded: 'com.mysql.jdbc.Driver' for keycloak-clustered 24.x and 25.x

I've been running Keycloak 23.0.7 with a MariaDB Galera cluster without issues using your images.

But when I try running a 24.x or 25.x Keycloak image using the same provided Infinispan configuration, the Keycloak container fails while loading the MySQL driver with the error: JDBC Driver required for JDBC_PING protocol could not be loaded: 'com.mysql.jdbc.Driver'

The same happens with a fresh test configuration as provided in the samples. Here are the commands to run the mariadb container and one instance of the keycloak-clustered container:

podman network

podman network create keycloak-net

mariadb container

podman run --rm --name mariadb -p 3306:3306 \
  -e MARIADB_DATABASE=keycloak \
  -e MARIADB_USER=keycloak \
  -e MARIADB_PASSWORD=password \
  -e MARIADB_ROOT_PASSWORD=root_password \
  --network keycloak-net \
  mariadb:10.11.6

keycloak container

podman run --rm --name keycloak-clustered-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mariadb \
  -e KC_DB_URL_HOST=mariadb \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-1 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:25.0.1 start-dev

Error log for keycloak container:

2024-07-10 12:12:55,722 DEBUG [org.infinispan.protostream.descriptors.ResolutionContext] (ForkJoinPool.commonPool-worker-1) File resolved successfully : persistence.multimap.proto
2024-07-10 12:12:55,722 INFO  [org.infinispan.CONTAINER] (ForkJoinPool.commonPool-worker-1) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2024-07-10 12:12:56,160 DEBUG [org.jgroups.protocols.TCP] (ForkJoinPool.commonPool-worker-1) thread pool min/max/keep-alive (ms): 0/200/60000
2024-07-10 12:12:56,165 DEBUG [org.jgroups.stack.Configurator] (ForkJoinPool.commonPool-worker-1) set property [email protected]_addr to default value /224.0.75.75
2024-07-10 12:12:56,168 DEBUG [org.jgroups.protocols.JDBC_PING] (ForkJoinPool.commonPool-worker-1) Registering JDBC Driver named 'com.mysql.jdbc.Driver'
2024-07-10 12:12:56,170 ERROR [org.infinispan.CONFIG] (ForkJoinPool.commonPool-worker-1) ISPN000660: DefaultCacheManager start failed, stopping any running components: org.infinispan.commons.CacheConfigurationException: ISPN000541: Error while trying to create a channel using the specified configuration '[TCP(bundler.max_size=64000, sock_conn_timeout=300, linger=-1, thread_pool.keep_alive_time=60000, diag.enabled=false, bind_port=7800, thread_naming_pattern=pl, non_blocking_sends=false, thread_pool.thread_dumps_threshold=10000, send_buf_size=640k, thread_pool.max_threads=200, use_virtual_threads=false, bundler_type=transfer-queue, external_addr=keycloak-clustered-1, thread_pool.min_threads=0), RED(), JDBC_PING(insert_single_sql=INSERT INTO JGROUPSPING (own_addr, cluster_name, bind_addr, updated, ping_data) values (?, ?, 'keycloak-clustered-1', NOW(), ?), connection_driver=com.mysql.jdbc.Driver, delete_single_sql=DELETE FROM JGROUPSPING WHERE own_addr=? AND cluster_name=?, info_writer_sleep_time=500, remove_all_data_on_view_change=true, select_all_pingdata_sql=SELECT ping_data, own_addr, cluster_name FROM JGROUPSPING WHERE cluster_name=?, connection_password=password, connection_url=jdbc:mysql://mariadb:3306/keycloak, initialize_sql=CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, bind_addr varchar(200) NOT NULL, updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, ping_data varbinary(5000) DEFAULT NULL, PRIMARY KEY (own_addr, cluster_name)) ENGINE=InnoDB DEFAULT CHARSET=utf8, clear_sql=DELETE FROM JGROUPSPING WHERE cluster_name=?, connection_username=keycloak), MERGE3(max_interval=30000, min_interval=10000), FD_SOCK2(offset=50000), FD_ALL3(), VERIFY_SUSPECT2(timeout=1000), pbcast.NAKACK2(xmit_interval=200, xmit_table_num_rows=50, resend_last_seqno=true, use_mcast_xmit=false, xmit_table_msgs_per_row=1024, xmit_table_max_compaction_time=30000), UNICAST3(conn_close_timeout=5000, xmit_interval=200, xmit_table_num_rows=50, xmit_table_msgs_per_row=1024, xmit_table_max_compaction_time=30000), pbcast.STABLE(desired_avg_gossip=5000, max_bytes=1M), pbcast.GMS(join_timeout=2000, print_local_addr=false), UFC(min_threshold=0.40, max_credits=4m), MFC(min_threshold=0.40, max_credits=4m), FRAG4(frag_size=60000)]'
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.channelFromConfigurator(JGroupsTransport.java:749)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:714)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:467)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:451)
	at org.infinispan.remoting.transport.jgroups.CorePackageImpl$2.start(CorePackageImpl.java:64)
	at org.infinispan.remoting.transport.jgroups.CorePackageImpl$2.start(CorePackageImpl.java:49)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:616)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:607)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:576)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:807)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.startDependencies(BasicComponentRegistryImpl.java:634)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:598)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:576)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:807)
	at org.infinispan.factories.GlobalComponentRegistry.preStart(GlobalComponentRegistry.java:307)
	at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:241)
	at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:778)
	at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:746)
	at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:412)
	at org.keycloak.quarkus.runtime.storage.legacy.infinispan.CacheManagerFactory.lambda$startEmbeddedCacheManager$3(CacheManagerFactory.java:154)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1760)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
Caused by: java.lang.IllegalArgumentException: JDBC Driver required for JDBC_PING  protocol could not be loaded: 'com.mysql.jdbc.Driver'
	at org.jgroups.protocols.JDBC_PING.loadDriver(JDBC_PING.java:281)
	at org.jgroups.protocols.JDBC_PING.init(JDBC_PING.java:107)
	at org.jgroups.stack.ProtocolStack.initProtocolStack(ProtocolStack.java:807)
	at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:443)
	at org.jgroups.JChannel.init(JChannel.java:916)
	at org.jgroups.JChannel.<init>(JChannel.java:128)
	at org.infinispan.remoting.transport.jgroups.EmbeddedJGroupsChannelConfigurator.createChannel(EmbeddedJGroupsChannelConfigurator.java:82)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.channelFromConfigurator(JGroupsTransport.java:747)
	... 26 more

2024-07-10 12:12:56,434 INFO  [com.arjuna.ats.jbossatx] (main) ARJUNA032014: Stopping transaction recovery manager
2024-07-10 12:12:56,443 DEBUG [org.infinispan.quarkus.hibernate.cache.QuarkusInfinispanRegionFactory] (main) Stop region factory
2024-07-10 12:12:56,444 DEBUG [org.infinispan.quarkus.hibernate.cache.QuarkusInfinispanRegionFactory] (main) Clear region references
2024-07-10 12:12:56,473 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2024-07-10 12:12:56,474 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start embedded or remote cache manager

I've tried building the images myself with the provided Dockerfile, but the result is the same, while the upstream Keycloak images connect to the same mariadb container without problems. I also tried hardcoding the ENV data into the Infinispan .xml and deleting the configuration for the DB providers other than mariadb, resulting in the same problem.

Any ideas?

HA problem

When one Keycloak goes down, the other side restarts. Is this normal behaviour? Can we make the other one stay up and not restart?

Note: I have a healthcheck and an autoheal container; I will check whether it restarts because of the health endpoint.
Note 2: Actually, autoheal does help here: it restarts kc2, which becomes available again in 1-2 minutes. If you don't restart it, it hangs because it can't connect to ispn1.

Unable to start keycloak on 2nd boot

Hi there,

on the 2nd boot, WildFly gets stuck on the line Setting JGroups discovery to TCPPING with properties {initial_hosts=>"192.168.0.138[7600],192.168.0.139[7600]"} and the Docker container exits with code 1.

The problem is that the TCPPING.cli script fails and no logs are printed to the output.

I solved this issue by editing TCPPING.cli like this:

embed-server --server-config=standalone-ha.xml --std-out=echo
batch
  /subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
  /subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
  /subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
  /subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
run-batch

try
  /subsystem=jgroups/stack=udp:remove()
  /subsystem=jgroups/stack=tcp/protocol=MPING:remove()
catch
end-try

try
  /subsystem=jgroups/stack=tcp/protocol=$keycloak_jgroups_discovery_protocol:remove()
catch
finally
  /subsystem=jgroups/stack=tcp/protocol=$keycloak_jgroups_discovery_protocol:add(add-index=0, properties=$keycloak_jgroups_discovery_protocol_properties)
  /subsystem=jgroups/channel=ee:write-attribute(name=stack, value="tcp")

  /subsystem=jgroups/stack=tcp/transport=TCP/property=external_addr/:add(value=${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1})
end-try
stop-embedded-server

Wrong Physical Address of ISPN

Hello, I am trying to set up caching between two keycloak:17.0.0 containers which are hosted on two different networks. When keycloak2 tries to connect to Infinispan on keycloak1, it times out.

keycloak1 (that IP is an internal Docker network IP):
ISPN000079: Channel ISPN local address is c1e5d04bfd44-19571, physical addresses are [192.168.48.5:7800]

keycloak2:
[org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) 5e3d9b655522-54955: JOIN(5e3d9b655522-54955) sent to c1e5d04bfd44-19571 timed out (after 2000 ms), on try 0

Can I explicitly set the physical IP?

Container won't start after initial run

A little info on my setup...
I have 3 CentOS 7 hosts running the latest Docker engine from the official Docker repository.
I am running a MariaDB Galera cluster in containers across all 3 hosts as the shared database for the Keycloak cluster.
I am using the latest Keycloak image from jboss/keycloak:latest and the JDBC_PING mod to cluster.

My docker run syntax (IPs and passwords removed):

docker run -d -p 8443:8443 -p 7600:7600 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=$KC_PASS \
  -e DB_VENDOR=mariadb \
  -e DB_ADDR=$DB_IP \
  -e DB_PORT=32775 \
  -e DB_USER=keycloak \
  -e DB_PASSWORD=$DB_PASS \
  -e DB_DATABASE=keycloak \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=$EXTERNAL_IP \
  -e JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING \
  -e JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS \
  -v /etc/x509/https/tls.crt:/etc/x509/https/tls.crt \
  -v /etc/x509/https/tls.key:/etc/x509/https/tls.key \
  --name keycloak ivanfranchin/keycloak-clustered:latest

When I do the initial docker run command they all start fine. They connect to the database and register themselves in the JGROUPSPING table in the database.
I can log in to each one individually and am able to see 3 different sessions being shared between all of them.
Everything appears to be working correctly.

If I stop a container (docker stop keycloak) and try to restart it, it will not come back up.

The error I see in the docker logs is:

The batch failed with the following error: : WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed: Step: step-9 Operation: /subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb, driver-module-name=org.mariadb.jdbc, driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource) Failure: WFLYCTL0212: Duplicate resource [ ("subsystem" => "datasources"), ("jdbc-driver" => "mariadb") ]

It looks like Wildfly is building the mariadb datasource again but I don't know how that could even persist in the container after it's stopped.

If I delete the container (docker rm keycloak) and re-run it, it will start and re-join the cluster.

Keycloak clustering issue: No members discovered.

Hi, I am currently configuring Keycloak for production. For this, I want to run Keycloak in cluster mode using TCPPING.

I have 2 AWS EC2 servers on which I am running Docker containers for Keycloak using the image "ivanfranchin/keycloak-clustered".

I have also added the environment variables below to the Docker configuration, according to the information given at https://www.keycloak.org/2019/05/keycloak-cluster-setup:

#IP address of this host, please make sure this IP can be accessed by the other Keycloak instances
JGROUPS_DISCOVERY_EXTERNAL_IP=172.31.140.50
#protocol
JGROUPS_DISCOVERY_PROTOCOL=TCPPING
#IP and Port of all host
JGROUPS_DISCOVERY_PROPERTIES=initial_hosts="172.31.140.50[7600],172.31.140.62[7600]"

The problem is that the two Keycloak containers running on different host servers cannot discover each other. I have also opened all ports between them on AWS. Can you help me with this?

Not able to clusterize Keycloak deployed as an Azure web app

I tried using JDBC_PING protocol for keycloak clusterization. The setup works fine on local, but we have problems establishing the same on Azure
I am deploying 2 keycloak instances as Azure web apps using shared Azure flexible postgresql DB, latest quay image and provided cache-ispn-jdbc-ping.xml.
When the App Service instances start, I only end up with a single record in the JGROUPSPING table. Each node adds a record with the node's IP address (a private DNS zone record in my case), which is then replaced by the next node that starts up.
That would mean my 2 Keycloak instances work as singletons (which I see in the logs) and are not actually clustered.
Is this some limitation on communication between Azure App Services on port 7600, or something else?

Cluster stress tests.

Hi,

I'm testing Postgres and 2 Keycloak instances under Docker Swarm, with Traefik as a load balancer, running in the same Docker overlay network.

Keycloak 1:

    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KC_DB=postgres
      - KC_DB_URL_HOST=keycloak-postgres
      - KC_DB_URL_DATABASE=keycloak
      - KC_DB_SCHEMA=clustered_jdbc
      - KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml
      - JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-jdbc1
      - KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG

Keycloak 2:

    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KC_DB=postgres
      - KC_DB_URL_HOST=keycloak-postgres
      - KC_DB_URL_DATABASE=keycloak
      - KC_DB_SCHEMA=clustered_jdbc
      - KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml
      - JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-jdbc2
      - KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG

JGROUPSPING table

keycloak=# SELECT * FROM clustered_jdbc.JGROUPSPING;
               own_addr               | cluster_name |   bind_addr    |          updated           |                                                ping_data                                                 
--------------------------------------+--------------+----------------+----------------------------+----------------------------------------------------------------------------------------------------------
 b1a481f2-96a9-4cf7-b1d1-c14d0bf38b35 | ISPN         | keycloak-jdbc2 | 2023-01-03 19:05:10.402106 | \x02b1d1c14d0bf38b35b1a481f296a94cf7030100146b6579636c6f616b2d6a646263322d343337353910040a0014b11e78ffff
 c4845a5e-6526-4664-a811-0f90a76b99c1 | ISPN         | keycloak-jdbc2 | 2023-01-03 19:05:10.429296 | \x02a8110f90a76b99c1c4845a5e65264664010100146b6579636c6f616b2d6a646263312d323738393010040a0002ee1e78ffff
(2 rows)

And I'm trying to run stress tests.
It's okay when one Keycloak leaves the cluster (Traefik sends requests to the surviving one):

docker service scale common_keycloak-jdbc2=0

Some logs:

Expected behavior:

DEBUG [org.jgroups.protocols.JDBC_PING] (Thread-15) Removed b1a481f2-96a9-4cf7-b1d1-c14d0bf38b35 for cluster ISPN from database
DEBUG [org.jgroups.protocols.JDBC_PING] (Thread-4) Removed b1a481f2-96a9-4cf7-b1d1-c14d0bf38b35 for cluster ISPN from database

Unexpected behavior:

DEBUG [org.jgroups.protocols.JDBC_PING] (jgroups-362,keycloak-jdbc1-27890) Removed c4845a5e-6526-4664-a811-0f90a76b99c1 for cluster ISPN from database
DEBUG [org.jgroups.protocols.JDBC_PING] (jgroups-362,keycloak-jdbc1-27890) Inserted c4845a5e-6526-4664-a811-0f90a76b99c1 for cluster ISPN into database
DEBUG [org.jgroups.protocols.JDBC_PING] (jgroups-362,keycloak-jdbc1-27890) Inserted c4845a5e-6526-4664-a811-0f90a76b99c1 for cluster ISPN into database
DEBUG [org.jgroups.protocols.JDBC_PING] (jgroups-362,keycloak-jdbc1-27890) Removed c4845a5e-6526-4664-a811-0f90a76b99c1 for cluster ISPN from database
DEBUG [org.jgroups.protocols.JDBC_PING] (jgroups-362,keycloak-jdbc1-27890) Removed c4845a5e-6526-4664-a811-0f90a76b99c1 for cluster ISPN from database
DEBUG [org.jgroups.protocols.JDBC_PING] (jgroups-362,keycloak-jdbc1-27890) Inserted c4845a5e-6526-4664-a811-0f90a76b99c1 for cluster ISPN into database

JGROUPSPING table:

keycloak=# SELECT * FROM clustered_jdbc.JGROUPSPING;
               own_addr               | cluster_name |   bind_addr    |          updated          |                                                ping_data                                                 
--------------------------------------+--------------+----------------+---------------------------+----------------------------------------------------------------------------------------------------------
 c4845a5e-6526-4664-a811-0f90a76b99c1 | ISPN         | keycloak-jdbc1 | 2023-01-04 09:37:22.26674 | \x02a8110f90a76b99c1c4845a5e65264664030100146b6579636c6f616b2d6a646263312d323738393010040a0002ee1e78ffff
(1 row)

but when it comes back, it takes some time to re-form the cluster:

docker service scale common_keycloak-jdbc2=1

Some logs:

DEBUG [org.jgroups.protocols.TCP] (TQ-Bundler-7,keycloak-jdbc2-35303) JGRP000034: keycloak-jdbc2-35303: failure sending message to keycloak-jdbc1-27890: java.net.ConnectException: Connection refused (Connection refused)
DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger-10,keycloak-jdbc2-35303) keycloak-jdbc2-35303: broadcasting suspect(keycloak-jdbc1-27890)
DEBUG [org.jgroups.protocols.FD_SOCK] (jgroups-380,keycloak-jdbc1-27890) keycloak-jdbc1-27890: suspecting [keycloak-jdbc2-35303]
DEBUG [org.jgroups.protocols.FD_SOCK] (jgroups-380,keycloak-jdbc1-27890) keycloak-jdbc1-27890: broadcasting unsuspect(keycloak-jdbc2-35303)
DEBUG [org.jgroups.protocols.FD_SOCK] (jgroups-21,keycloak-jdbc2-35303) keycloak-jdbc2-35303: suspecting [keycloak-jdbc1-27890]
DEBUG [org.jgroups.protocols.FD_SOCK] (jgroups-25,keycloak-jdbc2-35303) keycloak-jdbc2-35303: broadcasting unsuspect(keycloak-jdbc1-27890)
DEBUG [org.jgroups.protocols.FD_SOCK] (jgroups-382,keycloak-jdbc1-27890) keycloak-jdbc1-27890: broadcasting unsuspect(keycloak-jdbc2-35303)
...
followed by a series of Removed/Inserted messages (from both nodes) for cluster ISPN in the database

JGROUPSPING table

keycloak=# SELECT * FROM clustered_jdbc.JGROUPSPING;
               own_addr               | cluster_name |   bind_addr    |          updated           |                                                ping_data                                                 
--------------------------------------+--------------+----------------+----------------------------+----------------------------------------------------------------------------------------------------------
 c4845a5e-6526-4664-a811-0f90a76b99c1 | ISPN         | keycloak-jdbc1 | 2023-01-04 09:48:34.328075 | \x02a8110f90a76b99c1c4845a5e65264664030100146b6579636c6f616b2d6a646263312d323738393010040a0002ee1e78ffff
 262c1029-ab8a-481c-9b51-8e78c1906ef1 | ISPN         | keycloak-jdbc1 | 2023-01-04 09:48:34.347452 | \x029b518e78c1906ef1262c1029ab8a481c010100146b6579636c6f616b2d6a646263322d333533303310040a0014b41e78ffff
(2 rows)

So while all of this connection-refused, suspect/unsuspect, remove/insert churn is happening, the container is already up and running, and Traefik sends part of the requests to the not-yet-ready Keycloak instance.
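A possible mitigation, sketched here under the assumption that the instances run with health checks enabled (KC_HEALTH_ENABLED=true): have the load balancer probe Keycloak's readiness endpoint instead of relying on container start.

# A node still re-forming the cluster should fail this probe and receive
# no traffic; /health/ready is served once the instance is actually ready.
curl -fsS http://keycloak-jdbc2:8080/health/ready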

And the worst part is that sometimes the cluster fails to re-form: I can see two different bind_addr values for the same cluster_name in JGROUPSPING.

best regards,
Serhiy.

No sharing if second Keycloak instance is deployed on different docker host

No sharing occurs when deploying the second Keycloak instance on a different Docker host, connecting to the same DB instance:

E.g.: on node1:

docker network create keycloak-net
docker run --rm --name mariadb -p 3306:3306 \
  -e MYSQL_DATABASE=keycloak \
  -e MYSQL_USER=keycloak \
  -e MYSQL_PASSWORD=password \
  -e MYSQL_ROOT_PASSWORD=root_password \
  --network keycloak-net \
  mariadb:10.7.3
docker run --rm --name keycloak-clustered-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mariadb \
  -e KC_DB_URL_HOST=mariadb \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-1 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

On node2:

docker network create keycloak-net
docker run --rm --name keycloak-clustered-2 -p 8081:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mariadb \
  -e KC_DB_URL_HOST=node1 \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_LOG_LEVEL=INFO,org.infinispan:DEBUG,org.jgroups:DEBUG \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-clustered-2 \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev

Select entries in JGROUPSPING table:

MariaDB [keycloak]> SELECT * FROM JGROUPSPING;
+--------------------------------------+--------------+----------------------+---------------------+---------------------------------------------------+
| own_addr                             | cluster_name | bind_addr            | updated             | ping_data                                         |
+--------------------------------------+--------------+----------------------+---------------------+---------------------------------------------------+
| d67525d8-605f-40da-bacc-db3b0525ef79 | ISPN         | keycloak-clustered-2 | 2022-05-12 15:10:36 | ���;%�y�u%�`_@� fce00d6c5459-41359� x�� |
+--------------------------------------+--------------+----------------------+---------------------+---------------------------------------------------+
1 row in set (0.001 sec)

This returns only 1 record: the record inserted by the first-started instance got replaced by the record inserted by the second instance, which started later.

A possibly (un)related observation:
Deploying both Keycloak instances on the same Docker host, as described in https://github.com/ivangfr/keycloak-clustered#readme, selects 2 records:

MariaDB [keycloak]> SELECT * FROM JGROUPSPING;
+--------------------------------------+--------------+----------------------+---------------------+---------------------------------------------------+
| own_addr                             | cluster_name | bind_addr            | updated             | ping_data                                         |
+--------------------------------------+--------------+----------------------+---------------------+---------------------------------------------------+
| 50225a8d-4cef-4923-90d2-2d7bac85a076 | ISPN         | keycloak-clustered-1 | 2022-05-12 14:49:20 | ��-{���vP"Z�L�I# ac8d1bb0e981-2409� x��  |
| bbc842a4-9b19-4869-a30a-a73d868ef618 | ISPN         | keycloak-clustered-1 | 2022-05-12 14:49:20 | ��=�����B��Hi ac6ebd1b64e3-47267� x�� |
+--------------------------------------+--------------+----------------------+---------------------+---------------------------------------------------+
2 rows in set (0.001 sec)

with equal bind_addr (that of the first-started Keycloak instance)!?
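A hedged sketch of a likely fix: across two Docker hosts, a container name such as keycloak-clustered-2 is not resolvable or reachable from the other host, so each node acts as its own coordinator and (with remove_all_data_on_view_change) clears the table before re-inserting itself. Publishing the JGroups port and advertising an address the peer host can actually reach should let the nodes form one cluster; <node2-host-ip> below is a placeholder.

docker run --rm --name keycloak-clustered-2 -p 8081:8080 -p 7600:7600 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mariadb \
  -e KC_DB_URL_HOST=node1 \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=<node2-host-ip> \
  --network keycloak-net \
  ivanfranchin/keycloak-clustered:latest start-dev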

Token Sharing in cluster

Hi,
I was able to get two Keycloak containers to run on two different hosts thanks to your Medium article.
I wanted to know if this setup would be enough to use the auth token of one Keycloak instance to access resources from the other, because that doesn't seem to be happening in my case.
Or is there something more that needs to be configured?
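A quick way to check this, sketched against the README's local setup (master realm and the built-in admin-cli client; adjust realm, client, and ports to your deployment): get a token from one instance and present it to the other. Note that access tokens are signed JWTs, so each node can validate them by signature; features that consult server-side state are what depend on cache sharing.

# Obtain a token from instance 1 via the resource owner password grant.
TOKEN=$(curl -s -X POST http://localhost:8080/realms/master/protocol/openid-connect/token \
  -d 'client_id=admin-cli' -d 'grant_type=password' \
  -d 'username=admin' -d 'password=admin' | jq -r '.access_token')

# Present it to instance 2; a JSON userinfo response means the token is
# accepted across instances.
curl -s http://localhost:8081/realms/master/protocol/openid-connect/userinfo \
  -H "Authorization: Bearer $TOKEN"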

jgroupsping table is created in public schema when DB_SCHEMA is different from DB_USER

Description

  • docker image: ivanfranchin/keycloak-clustered:12.0.4
  • database: postgresql 12.0
    • user: hello
    • schema: world
  • command:
docker run --rm --name keycloak1 \
    --net mynetwork --ip 172.18.0.2 \
    -p 8080:8080 \
    -p 8443:8443 \
    -p 8009:8009 \
    -p 9990:9990 \
    -p 7600:7600 \
    -p 57600:57600 \
    -e KEYCLOAK_USER=admin \
    -e KEYCLOAK_PASSWORD=admin \
    -e DB_VENDOR=postgres \
    -e DB_ADDR=192.168.64.2 \
    -e DB_PORT=5432 \
    -e DB_DATABASE=hello \
    -e DB_SCHEMA=world \
    -e DB_USER=hello \
    -e DB_PASSWORD=hello \
    -e JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING \
    -e JGROUPS_DISCOVERY_EXTERNAL_IP=172.18.0.2 \
    -e JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS \
    -e PROXY_ADDRESS_FORWARDING=true \
    -e KEYCLOAK_FRONTEND_URL=http://192.168.64.2:8080/auth/ \
    ivanfranchin/keycloak-clustered:12.0.4

Question

If DB_SCHEMA is different from DB_USER, all tables are located in DB_SCHEMA (world for this example), except jgroupsping.

Where is jgroupsping?
If there is a schema with the same name as DB_USER (hello for this example), jgroupsping will be located in that schema.
If there is no schema named after DB_USER, jgroupsping will be located in the public schema.
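This matches PostgreSQL name resolution: the table is created with an unqualified CREATE TABLE, and Postgres resolves unqualified names through search_path, which defaults to "$user", public. A quick confirmation sketch (connection details taken from the command above):

# With the default search_path of "$user", public, an unqualified CREATE
# TABLE lands in a schema named after the user if one exists, else public.
psql -h 192.168.64.2 -p 5432 -U hello -d hello -c 'SHOW search_path;'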


Requirement or improvement

The jgroupsping table should be created in the DB_SCHEMA schema.

Wrong Fields in JGROUPSPING table

When I use your prebuilt image (ivanfranchin/keycloak-clustered:latest), I get the following columns in the JGROUPSPING table:

               own_addr               | cluster_name | bind_addr |          updated           |                                              ping_data                                               
--------------------------------------+--------------+-----------+----------------------------+------------------------------------------------------------------------------------------------------
 c1b085ca-1b8b-454b-b784-4ef13fbd9d37 | ISPN         | 10.9.9.42 | 2022-10-04 14:11:58.300151 | \x02b7844ef13fbd9d37c1b085ca1b8b454b030100123631626164626536613561352d343630363310040a09092a1e78ffff

But if I build my own image with the cache-ispn-jdbc-ping.xml inside the 19.0.2 folder (https://github.com/ivangfr/keycloak-clustered/tree/master/19.0.2), I get the following results:

               own_addr               | bind_addr |          created           | cluster_name |                                              ping_data                                               
--------------------------------------+-----------+----------------------------+--------------+------------------------------------------------------------------------------------------------------
 16c69e8a-c9aa-49c5-aeb9-94bf47c65266 | 127.0.0.1 | 2022-10-04 14:07:46.437592 | ISPN         | \x02aeb994bf47c6526616c69e8ac9aa49c5030100123638313366656139656565622d35303834321004ac1800021e78ffff

The updated column doesn't exist (it is named created instead), and bind_addr is not equal.

Could you share the cache-ispn-jdbc-ping.xml used to generate the latest image?

My Dockerfile:

FROM quay.io/keycloak/keycloak:19.0.2

COPY cache-ispn-jdbc-ping.xml /opt/keycloak/conf/cache-ispn-jdbc-ping.xml

RUN rm -f /opt/keycloak/conf/cache-ispn.xml

ENV KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml
ENV KC_DB=postgres

RUN /opt/keycloak/bin/kc.sh build --db=postgres
RUN /opt/keycloak/bin/kc.sh show-config

ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]

Cluster members don't discover each other on initial boot, but all works well after one of the containers is restarted

I am using a cluster in "standalone mode" with 2 Keycloak nodes running on different AWS EC2 machines (from different availability zones). The 2 Keycloak instances can reach each other on port 7600 via the Docker host IPs.
I was able to mount the latest TCPPING.cli script to the latest official Keycloak image (11.0.2) instead of using this custom Keycloak image (which was a great inspiration and which I also used to experiment with TCPPING and JDBC_PING!). The cluster works as expected.

I encounter 1 problem though with the initial pairing of the cluster members: my experience is that in order for the cluster nodes to initially discover each other, a restart of one of the containers is necessary. More exactly: 2 newly created Keycloak containers won't discover each other until I restart one of them. After this initial pairing all works as expected, but this is a bit annoying for the initial run of the Keycloak cluster in all new environments and will also require extra restarts after each Keycloak upgrade.

I tried a lighter alternative to restarting Keycloak: executing a Wildfly reload using /opt/jboss/keycloak/bin/jboss-cli.sh --connect --command=":reload" (which would have been easy to add to TCPPING.cli), but this is not sufficient.

I experienced the same with your image as well. Let me know if you have a solution to this problem, or whether you didn't encounter it at all.
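For reference, the workaround from this report as a command (restart one member after both are up, so its next discovery round finds the already-running peer):

# Restarting a single node is enough to complete the initial pairing.
docker restart keycloak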

Access token can not be shared.

Hello, could you help me check this issue?
Based on what you provided, I tried it and found that although the session can be shared, the access token cannot. Is this a problem caused by my configuration, such as cache-ispn-jdbc-ping.xml?

docker run --name keycloak-1 -d -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mysql \
  -e KC_DB_URL=jdbc:mysql://192.168.56.107:3306/keycloak \
  -e KC_DB_USERNAME=root \
  -e KC_DB_PASSWORD=secret \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-1 \
  -e KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml \
  -v ${PWD}/cache-ispn-jdbc-ping.xml:/opt/keycloak/conf/cache-ispn-jdbc-ping.xml \
  --network keycloak-net \
  quay.io/keycloak/keycloak:22.0.5 start --auto-build --http-enabled=true \
  --hostname-strict-backchannel=false --hostname-strict=false \
  --https-client-auth=none --proxy=edge --metrics-enabled=true

docker run --name keycloak-2 -d -p 8081:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=mysql \
  -e KC_DB_URL=jdbc:mysql://192.168.56.107:3306/keycloak \
  -e KC_DB_USERNAME=root \
  -e KC_DB_PASSWORD=secret \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=keycloak-2 \
  -e KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml \
  -v ${PWD}/cache-ispn-jdbc-ping.xml:/opt/keycloak/conf/cache-ispn-jdbc-ping.xml \
  --network keycloak-net \
  quay.io/keycloak/keycloak:22.0.5 start --auto-build --http-enabled=true \
  --hostname-strict-backchannel=false --hostname-strict=false \
  --https-client-auth=none --proxy=edge --metrics-enabled=true

Step 1: get the access token from 8080:

curl -X POST http://192.168.56.107:8080/realms/quickstart/protocol/openid-connect/token \
  -H 'content-type: application/x-www-form-urlencoded' \
  -d 'client_id=authz-servlet&client_secret=secret' \
  -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token'

Step 2: get the RPT from 8081, but it returns null:

curl -X POST http://192.168.56.107:8081/realms/quickstart/protocol/openid-connect/token \
  -H "Authorization: Bearer ${access_token}" \
  --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
  --data "audience=authz-servlet"

This is my cache-ispn-jdbc-ping.xml:

<jgroups>
    <stack name="postgres-jdbc-ping-tcp" extends="tcp">
        <TCP external_addr="${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}" />
        <JDBC_PING
            connection_driver="com.mysql.cj.jdbc.Driver"
            connection_username="root"
            connection_password="secret"
            connection_url="jdbc:mysql://192.168.56.107:3306/keycloak"
            initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, bind_addr varchar(200) NOT NULL, updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, ping_data varbinary(5000) DEFAULT NULL, PRIMARY KEY (own_addr, cluster_name)) ENGINE=InnoDB DEFAULT CHARSET=utf8;"
            insert_single_sql="INSERT INTO JGROUPSPING (own_addr, cluster_name, bind_addr, updated, ping_data) values (?, ?, '${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}', NOW(), ?);"
            delete_single_sql="DELETE FROM JGROUPSPING WHERE own_addr=? AND cluster_name=?;"
            select_all_pingdata_sql="SELECT ping_data, own_addr, cluster_name FROM JGROUPSPING WHERE cluster_name=?;"
            info_writer_sleep_time="500"
            remove_all_data_on_view_change="true"
            stack.combine="REPLACE"
            stack.position="MPING"
        />
    </stack>
</jgroups>

<cache-container name="keycloak">
    <transport lock-timeout="60000" stack="postgres-jdbc-ping-tcp"/>
    <local-cache name="realms" simple-cache="true">
        <encoding>
            <key media-type="application/x-java-object"/>
            <value media-type="application/x-java-object"/>
        </encoding>
        <memory max-count="10000"/>
    </local-cache>
    <local-cache name="users" simple-cache="true">
        <encoding>
            <key media-type="application/x-java-object"/>
            <value media-type="application/x-java-object"/>
        </encoding>
        <memory max-count="10000"/>
    </local-cache>
    <distributed-cache name="sessions" owners="2">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="authenticationSessions" owners="2">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="offlineSessions" owners="2">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="clientSessions" owners="2">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="offlineClientSessions" owners="2">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="loginFailures" owners="2">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <local-cache name="authorization" simple-cache="true">
        <encoding>
            <key media-type="application/x-java-object"/>
            <value media-type="application/x-java-object"/>
        </encoding>
        <memory max-count="10000"/>
    </local-cache>
    <replicated-cache name="work">
        <expiration lifespan="-1"/>
    </replicated-cache>
    <local-cache name="keys" simple-cache="true">
        <encoding>
            <key media-type="application/x-java-object"/>
            <value media-type="application/x-java-object"/>
        </encoding>
        <expiration max-idle="3600000"/>
        <memory max-count="1000"/>
    </local-cache>
    <distributed-cache name="actionTokens" owners="2">
        <encoding>
            <key media-type="application/x-java-object"/>
            <value media-type="application/x-java-object"/>
        </encoding>
        <expiration max-idle="-1" lifespan="-1" interval="300000"/>
        <memory max-count="-1"/>
    </distributed-cache>
</cache-container>
