uyuni-project / uyuni


Source code for Uyuni

Home Page: https://www.uyuni-project.org/

License: GNU General Public License v2.0

Makefile 0.14% Python 14.92% Shell 1.32% Roff 0.05% CSS 0.03% Java 59.10% C 0.01% HTML 0.02% Perl 0.97% sed 0.01% XSLT 0.01% JavaScript 12.21% Dockerfile 0.08% PLSQL 1.33% PLpgSQL 5.54% Genshi 0.01% SaltStack 0.28% Ruby 1.44% Gherkin 2.39% Less 0.17%
system-management linux java python reactjs saltstack cucumber uyuni suse-manager spacewalk

uyuni's People

Contributors

adelton, aronparsons, bischoff, cbbayburt, cbosdo, dgoodwin, dmacvicar, ggainey, hustodemon, jdobes, jlsherrill, juliogonzalez, lucidd, mallozup, mantel, mbologna, mcalmer, mccun934, meaksh, michaelmraka, moio, mzazrivec, ncounter, parlt91, parthaa, renner, sdherr, tkasparek, tlestach, xsuchy

uyuni's Issues

Add possibility to automate initial organization/user creation

Is your feature request related to a problem? Please describe.
I'm currently writing a Chef cookbook for automating Uyuni test deployments.
After installing Uyuni, the first organization still needs to be created using
the web page.

I understand that there is an automation tool, spacecmd, but it does not seem to
work if no organization or admin user exists:

# spacecmd org_create
Spacewalk Username:
Spacewalk Password:
ERROR: Invalid credentials

I found that entering no username/password does not work either (nor via the
.spacecmd/config file), so the error message seems legitimate.

Describe the solution you'd like
It would be great if spacecmd worked without a username/password for the initial setup, or provided a dedicated command for this, e.g. spacecmd initial_setup.

Describe alternatives you've considered
As a workaround, I'm using curl to fetch the CSRF token and send a POST request to the appropriate endpoint:

csrf="$(curl --insecure https://localhost/rhn/newlogin/CreateFirstUser.do |
    grep csrf | grep -o 'value=.*' | tr -d 'a-zA-Z">=')"
curl --insecure \
    -d "csrf_token=$csrf&submitted=true&orgName=demoOrg&login=admin&desiredpassword=Mimimi01&desiredpasswordConfirm=Mimimi01&email=info@xxxxxxxxxxxx&firstNames=Paula&lastName=Pinkepank" \
    -X POST https://localhost/rhn/newlogin/CreateFirstUser.do
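For illustration, a rough Python equivalent of the same workaround using the requests library. The form field names are taken from the curl call above; the regex for the hidden CSRF field and the example credentials are assumptions, and verify=False mirrors --insecure (test deployments only).

# Sketch of the first-user workaround above in Python (assumed field names/regex).
import re
import requests

URL = "https://localhost/rhn/newlogin/CreateFirstUser.do"

session = requests.Session()
page = session.get(URL, verify=False)  # --insecure
# Assumes the hidden input is rendered as name="csrf_token" value="...";
# adjust the regex if the attribute order differs.
match = re.search(r'name="csrf_token"\s+value="([^"]+)"', page.text)
csrf = match.group(1) if match else ""

form = {
    "csrf_token": csrf,
    "submitted": "true",
    "orgName": "demoOrg",
    "login": "admin",
    "desiredpassword": "Mimimi01",
    "desiredpasswordConfirm": "Mimimi01",
    "email": "info@example.com",   # placeholder address
    "firstNames": "Paula",
    "lastName": "Pinkepank",
}
response = session.post(URL, data=form, verify=False)
print(response.status_code)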

Additional context
See also the discussion on the mailing list: https://lists.opensuse.org/uyuni-devel/2018-11/msg00009.html

More readable kiwi error messages

When a Kiwi build fails, it's pretty hard to spot the error within the huge JSON result of the Kiwi build state. It would be great to extract the stderr and format it properly, so the user can easily spot the problem.

A wild guess would be that this could be implemented in SaltUtils.handleImageBuildData()
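To illustrate the idea only (the real code would live in the Java SaltUtils class), a minimal Python sketch of "extract stderr from the build result", assuming the build state returns nested JSON with stderr fields somewhere inside; the file name and structure are assumptions.

# Hypothetical sketch: walk a parsed Kiwi build result and collect anything
# stored under an "stderr" key, so it can be shown instead of the full JSON.
import json

def collect_stderr(node, found=None):
    if found is None:
        found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "stderr" and isinstance(value, str) and value.strip():
                found.append(value.strip())
            else:
                collect_stderr(value, found)
    elif isinstance(node, list):
        for item in node:
            collect_stderr(item, found)
    return found

if __name__ == "__main__":
    with open("kiwi-build-result.json") as fh:   # hypothetical file name
        result = json.load(fh)
    print("\n---\n".join(collect_stderr(result)) or "no stderr found")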

CentOS 7 bootstrap on Dev Release

CentOS 7 fails to bootstrap out of the box with 4.0 RC1 (dev release) installed on Leap 15.1.

It seems that epel-release is needed on the CentOS machine for the Salt minion to install.

Also, /etc/salt/minion.d doesn't appear to get created; creating the directory manually seems to fix the issue (see the sketch below).
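A minimal sketch of that manual workaround, assuming root access on the CentOS 7 client and that yum is available; this is only the preparation step described above, not part of the bootstrap script.

# Hypothetical helper: apply the two manual workarounds before bootstrapping
# a CentOS 7 client (install epel-release, ensure /etc/salt/minion.d exists).
import os
import subprocess

def prepare_centos7_client():
    subprocess.run(["yum", "-y", "install", "epel-release"], check=True)
    os.makedirs("/etc/salt/minion.d", exist_ok=True)

if __name__ == "__main__":
    prepare_centos7_client()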

Terraform v12 migration errors

Hi, I'm running Terraform v12 locally with a custom libvirt provider built against TF12.

It seems that Terraform now has new reserved argument names. HTH

Error: Reserved argument name in module block

  on main.tf line 40, in module "client":
  40:   count = 1

The name "count" is reserved for use in a future version of Terraform.


Error: Reserved argument name in module block

  on main.tf line 51, in module "minion":
  51:   count = 1

The name "count" is reserved for use in a future version of Terraform.

Uyuni should work without SCC credentials

Using master on Leap 15.0 and launching mgr-sync-refresh without adding SCC credentials:

2019-05-10 12:28:18,043 [DefaultQuartzScheduler_Worker-8] ERROR com.redhat.rhn.taskomatic.task.MgrSyncRefresh - Error during mgr-sync refresh
com.redhat.rhn.manager.content.ContentSyncException: No SCC credentials found.
        at com.redhat.rhn.manager.content.ContentSyncManager.filterCredentials(ContentSyncManager.java:227)
        at com.redhat.rhn.manager.content.ContentSyncManager.getProducts(ContentSyncManager.java:241)
        at com.redhat.rhn.taskomatic.task.MgrSyncRefresh.execute(MgrSyncRefresh.java:97)
        at com.redhat.rhn.taskomatic.task.RhnJavaJob.execute(RhnJavaJob.java:88)
        at com.redhat.rhn.taskomatic.TaskoJob.execute(TaskoJob.java:199)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

Having Uyuni users without SUSE subscriptions is a valid scenario.

As a workaround until this is fixed, creating an SCC account without subscriptions should avoid the error.

Mail reports still show SUSE Manager at some places

Subject: Uyuni Patch Alert: SUSE-12-SP3-2018-2451 - Recommended update for branding-SLE
 From: " SUSE Manager (uyuni-server.suse.inet) " [email protected]
To: XXXXXXXXXXXXXXXXX
Date: 25/10/18 12:29

Uyuni has determined that the following advisory is applicable to
one or more of the systems you have registered:

Complete information about this patch can be found at the following location:
https://uyuni-server.suse.inet/rhn/errata/details/Details.do?eid=2590

Bug Fix Advisory - SUSE-12-SP3-2018-2451

Summary:
Recommended update for branding-SLE

This update for branding-SLE fixes the following issues:

  • Parse "\n" in plymouth theme text to new lines (bsc#1083702)


Affected Systems List

This Patch Advisory may apply to the systems listed below. If you know that
this patch does not apply to a system listed, it might be possible that the
package profile for that server is out of date. In that case you should run
'up2date -p' (RHEL 4 and below) or 'rhn-profile-sync' (SLES, RHEL 5 and above)
as root on the system in question to refresh your software profile.

There is 1 affected system registered in 'Overview' (only systems for
which you have explicitly enabled Patch Alerts are shown).

Release Arch Profile Name


12.3 x86_64 192.168.3.91

You may address the issues outlined in this advisory in two ways:

 - select your server name by clicking on its name from the list
   available at the following location, and then schedule a
   patch update for it:
       https://uyuni-server.suse.inet/rhn/systems/Overview.do

 - run the Update Agent on each affected server.

Changing Notification Preferences

To enable/disable your Patch Alert preferences globally please log in to Uyuni
and navigate from "Overview" / "Your Account" to the "Preferences" tab.

    URL: https://uyuni-server.suse.inet/rhn/account/UserPreferences.do

You can also enable/disable notification on a per system basis by selecting an
individual system from the "Systems List". From the individual system view
click the "Details" tab.

--the SUSE Manager Team

Account Information:
Your SUSE Manager login: admin
Your SUSE Manager email address: XXXXXXXXXXXXXXXXX

Implement correct mapping of packages reported by 'profileupdate' for Ubuntu/Debian

The package information reported by packages.profileupdate for Ubuntu/Debian differs from SUSE OSs. As a result, for example, package architectures cannot be mapped by Uyuni correctly, therefore having problems to display package information, finding out package updates, etc.

Original findings by @rpasche:

A further "problem" is the architecture. The architecture returned from an x86_64 Debian system is amd64, but within Uyuni (and Spacewalk), x86_64 Debian systems have an architecture name of amd64-deb.

In my tests, I still have a problem when I do NOT modify the architecture. If I return the original architecture (here, amd64 for an x86_64 Debian system), I won't get package update suggestions, as the system will appear "up-to-date" within Uyuni (also the software package links are broken). But if I modify the architecture (appending -deb, so it returns amd64-deb), I see possible software package updates and the software package links are correct.

Although we can choose to handle this on different levels (states, Salt modules, etc.), I believe the most convenient place would be the Java code, since we often do this kind of mapping there.

The goal of this task should be to add necessary mappings to the profile update handling code in Java for package architectures (e.g. amd64 to amd64-deb) and identify if any other attributes need to be mapped and implement accordingly.
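As a rough illustration of the mapping itself (the actual implementation would be in the Java profile update handling), a small Python sketch; only amd64 -> amd64-deb and all -> all-deb are confirmed in this report, the other entries are assumptions.

# Hypothetical mapping from the architecture reported by a Debian/Ubuntu
# minion to the architecture label Uyuni/Spacewalk uses internally.
DEB_ARCH_MAP = {
    "amd64": "amd64-deb",   # confirmed by this issue
    "all": "all-deb",       # matches the all-deb packages seen in channels
    "i386": "i386-deb",     # assumption
    "arm64": "arm64-deb",   # assumption
}

def map_deb_arch(reported_arch):
    """Return the Uyuni-internal architecture name for a Debian arch."""
    return DEB_ARCH_MAP.get(reported_arch, reported_arch + "-deb")

print(map_deb_arch("amd64"))  # amd64-deb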

A good place to start would be the following entry points in Java:

Checklist

  • Map package architectures on profile update amd64 -> amd64-deb
  • Fix version definitions when calling package states (install, update, remove, etc.)

Links

Package dependency error in spacewalk-backend-tools on openSUSE 42.3?

Just testing improvements for Ubuntu/Debian clients: I rebuilt the spacewalk-backend, spacewalk-config and susemanager-sls packages from the ubuntu-support branch.

Trying to install the resulting spacewalk-backend-tools-4.0.7 package results in a dependency error on openSUSE 42.3.

Is this correct?

uyuni-test:~/ubuntu-support # rpm -Uvh *.rpm
error: Failed dependencies:
python2-urlgrabber is needed by spacewalk-backend-tools-4.0.7-1.noarch

I only have "python-urlgrabber" installed:

uyuni-test:~/ubuntu-support # rpm -q --info python-urlgrabber
Name : python-urlgrabber
Version : 3.9.1
Release : 16.4
Architecture: noarch
Install Date: Fri Oct 5 11:33:06 2018
Group : Development/Languages/Python
Size : 304495
License : LGPL-2.1
Signature : RSA/SHA256, Fri May 19 20:32:55 2017, Key ID b88b2fd43dbdc284
Source RPM : python-urlgrabber-3.9.1-16.4.src.rpm
Build Date : Fri May 19 20:32:46 2017
Build Host : cloud103
Relocations : (not relocatable)
Packager : http://bugs.opensuse.org
Vendor : openSUSE
URL : http://urlgrabber.baseurl.org
Summary : A high-level cross-protocol url-grabber
Description :
A high-level cross-protocol url-grabber for python supporting HTTP, FTP
and file locations. Features include keepalive, byte ranges,
throttling, authentication, proxies and more.
Distribution: openSUSE Leap 42.3

This change was introduced in commit a839b86.

I just want to be sure whether this is a mistake in the spec file or whether there really is another urlgrabber Python package.

Machines bootstrapped overriding old ones

While registering machines, I am getting some machines overriding older ones that are already registered. I've checked system UUIDs, SSH keys, just about anything I can remember. Has anyone seen this behaviour?

Repos data not updated for minions

After syncing the repos, minions cannot access the new packages.

I checked the repo data in /var/cache/rhn/repodata// and it is not updated.

In /var/log/rhn/rhn_taskomatic_daemon.log I see:

2019-04-20 15:01:00,113 [Thread-69] WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: 42703
2019-04-20 15:01:00,113 [Thread-69] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - ERROR: column softwareen0_.status does not exist
Position: 74

Any suggestion? Could this be a migration error (after 16 Apr 2019)?

Thanks

mgr-websockify service does not start. Referencing missing JWTTokenApi

On Leap 15.0, running Uyuni Server, the mgr-websockify service does not start. It references a missing module/class, JWTTokenApi. This should be included in the python3-websockify-0.8.0 package, but the token_plugins.py file is missing the requested class.

Apr 03 16:46:47 uyuni-test systemd[1]: Starting TCP to WebSocket proxy...
Apr 03 16:46:48 uyuni-test systemd[1]: Started TCP to WebSocket proxy.
Apr 03 16:46:49 uyuni-test websockify[1011]: Traceback (most recent call last):
Apr 03 16:46:49 uyuni-test websockify[1011]:   File "/usr/bin/websockify", line 11, in <module>
Apr 03 16:46:49 uyuni-test websockify[1011]:     load_entry_point('websockify==0.8.0', 'console_scripts', 'websockify')()
Apr 03 16:46:49 uyuni-test websockify[1011]:   File "/usr/lib/python3.6/site-packages/websockify/websocketproxy.py", line 511, in websockify_init
Apr 03 16:46:49 uyuni-test websockify[1011]:     token_plugin_cls = getattr(sys.modules[token_plugin_module], token_plugin_cls)
Apr 03 16:46:49 uyuni-test websockify[1011]: AttributeError: module 'websockify.token_plugins' has no attribute 'JWTTokenApi'
Apr 03 16:46:49 uyuni-test systemd[1]: mgr-websockify.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 16:46:49 uyuni-test systemd[1]: mgr-websockify.service: Unit entered failed state.
Apr 03 16:46:49 uyuni-test systemd[1]: mgr-websockify.service: Failed with result 'exit-code'.

The mgr-websockify service was introduced in c2f8faf.

The token_plugins.py file is missing the class that was introduced upstream in novnc/websockify@f2031ef.
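A quick way to confirm whether the installed websockify ships the class, independent of the systemd unit, is to attempt the import directly (this is just a check, not a fix):

# Check whether the installed websockify provides the JWTTokenApi token plugin.
try:
    from websockify.token_plugins import JWTTokenApi  # noqa: F401
    print("JWTTokenApi is available")
except ImportError as err:
    print("JWTTokenApi is missing: %s" % err)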

uyuni-test:~ # rpm -q --info python3-websockify
Name        : python3-websockify
Version     : 0.8.0
Release     : lp150.1.3
Architecture: noarch
Install Date: Wed Apr  3 09:06:17 2019
Group       : Development/Languages/Python
Size        : 194034
License     : LGPL-3.0 and MPL-2.0 and BSD-2-Clause and BSD-3-Clause
Signature   : RSA/SHA256, Tue Feb 20 00:00:12 2018, Key ID b88b2fd43dbdc284
Source RPM  : python-websockify-0.8.0-lp150.1.3.src.rpm
Build Date  : Mon Oct  2 14:00:00 2017
Build Host  : lamb67
Relocations : (not relocatable)
Packager    : https://bugs.opensuse.org
Vendor      : openSUSE
URL         : https://github.com/kanaka/websockify
Summary     : WebSocket to TCP proxy/bridge
Description :
websockify was formerly named wsproxy and was part of the
noVNC project.

At the most basic level, websockify just translates WebSockets traffic
to normal socket traffic. Websockify accepts the WebSockets handshake,
parses it, and then begins forwarding traffic between the client and
the target in both directions.
Distribution: openSUSE Leap 15.0
uyuni-test:~ #

spacewalk-common-channels traceback

USER@uyuni:~> sudo spacewalk-common-channels -u USER 'opensuse_leap15_0*'
SUSE Manager password:
Traceback (most recent call last):
  File "/usr/bin/spacewalk-common-channels", line 229, in <module>
    add_channels(channels, section, arch, client)
  File "/usr/bin/spacewalk-common-channels", line 92, in add_channels
    config.get(section, 'base_channels'), 1)
  File "/usr/lib64/python3.6/configparser.py", line 800, in get
    d)
  File "/usr/lib64/python3.6/configparser.py", line 394, in before_get
    self._interpolate_some(parser, option, L, value, section, defaults, 1)
  File "/usr/lib64/python3.6/configparser.py", line 434, in _interpolate_some
    option, section, rawval, var) from None
configparser.InterpolationMissingOptionError: Bad value substitution: option 'base_channels' in section 'fedora27-updates' contains an interpolation key 'arch' which is not a valid option name. Raw value: 'fedora27-%(arch)s'
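For reference, the error can be reproduced with configparser alone: '%(arch)s' can only be expanded when 'arch' is available as an option (in the section, in DEFAULT, or passed via vars=). This is a minimal repro, not the spacewalk-common-channels code itself.

# Minimal reproduction of the InterpolationMissingOptionError seen above.
import configparser

config = configparser.ConfigParser()
config.read_string("""
[fedora27-updates]
base_channels = fedora27-%(arch)s
""")

try:
    config.get("fedora27-updates", "base_channels")
except configparser.InterpolationMissingOptionError as err:
    print("fails as in the traceback:", err)

# Supplying the value makes the interpolation succeed:
print(config.get("fedora27-updates", "base_channels", vars={"arch": "x86_64"}))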

RHEL 8 content cannot be correctly synced in a usable form in Uyuni

My current experiments with managing RHEL 8 content in Uyuni indicate that Uyuni cannot correctly sync the content and offer working channels to RHEL 8 clients.

This is unfortunately largely due to how RH/Fedora modules work. Among other things, when the package manager isn't module-aware, the repository looks like nonsense and strange cross-sections of packages are pulled in and made available, which breaks the usage of modules.

Moreover, it's not easy to deal with modules directly right now, as the behavior is not fully specified for alternative implementations and remains in flux. The recommended way to interface with modules is through the DNF package manager API.

For this and other reasons, I've filed uyuni-project/uyuni-rfc#4 to propose moving from Zypper to DNF to fix this.

Moving to LEAP 15?

Hi,

I just wanted to try out Uyuni, but it asks me to use Leap 42.3. Is it planned that the project moves forward to Leap 15?

Add API Doc versioning Section to API Doc Sources

We should add an API doc version chapter at the beginning of the generated PDF. I believe this means we need to update the Javadoc XML sources somewhere.

I am really not sure where we edit the Javadoc sources, but I am sure we would need to define the db4 XML section and insert a variable containing the API doc version string.

Error during applying the bootstrap state, message: Response code: 500

Hi, I am very new to Uyuni and Linux in general. I followed the setup guide; however, I cannot add a SLES 12 SP3 client to Uyuni using the web UI due to the error: Error during applying the bootstrap state, message: Response code: 500

The hostnames are set correctly and pingable and an activation key was generated.

Am I required to manually install Salt master on the Uyuni server and salt minion on the client I am trying to add? Thanks.

This is what I am seeing in /var/log/rhn/rhn_web_ui.log:

2019-06-11 12:39:26,942 [ajp-apr-0:0:0:0:0:0:0:1-8009-exec-2] ERROR com.suse.manager.webui.controllers.utils.AbstractMinionBootstrapper - Exception during bootstrap: Response code: 500
com.suse.salt.netapi.exception.SaltException: Response code: 500
    at com.suse.salt.netapi.client.impl.HttpAsyncClientImpl.createSaltException(HttpAsyncClientImpl.java:145)
    at com.suse.salt.netapi.client.impl.HttpAsyncClientImpl.access$000(HttpAsyncClientImpl.java:27)
    at com.suse.salt.netapi.client.impl.HttpAsyncClientImpl$1.completed(HttpAsyncClientImpl.java:121)
    at com.suse.salt.netapi.client.impl.HttpAsyncClientImpl$1.completed(HttpAsyncClientImpl.java:101)
    at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
    at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:412)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:305)
    at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:267)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:116)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:164)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:339)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:317)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:278)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
    at java.lang.Thread.run(Thread.java:748)

Fix unchecked types warnings.

There is a lot of Java code that doesn't use generic types, and this generates thousands of warnings. This task will certainly take time, but it can easily be split.

Errors when installing Uyuni Server

Using master packages:

Error output:

SQLite header and source version mismatch
2018-04-10 17:39:29 4bb2294022060e61de7da5c227a69ccd846ba330e31626ebcd59a94efd148b3b
2019-02-25 16:06:06 bd49a8271d650fa89e446b42e513b595a717b9212c91dd384aab871fc1d0f6d7
SQLite header and source version mismatch
2018-04-10 17:39:29 4bb2294022060e61de7da5c227a69ccd846ba330e31626ebcd59a94efd148b3b
2019-02-25 16:06:06 bd49a8271d650fa89e446b42e513b595a717b9212c91dd384aab871fc1d0f6d7
chown: Cannot access '/var/lib/jabberd/db/sqlite.db': No such file or directory

I can't see jabberd failing to start:

Setup script output:

*** Progress: #
* Configuring tomcat.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
* Performing initial configuration.
* Configuring apache SSL virtual host.
** /etc/apache2/vhosts.d/vhost-ssl.conf has been backed up to vhost-ssl.conf-swsave
* Configuring jabberd.
* Creating SSL certificates.
** SSL: Generating CA certificate.
** SSL: Deploying CA certificate.
** SSL: Generating server certificate.
** SSL: Storing SSL certificates.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
Created symlink /etc/systemd/system/sockets.target.wants/tftp.socket → /usr/lib/systemd/system/tftp.socket.
* Restarting services.
Installation complete.
Visit https://uyuni-server.local.inet to create the Spacewalk administrator account.
INFO: Database configuration has been changed.
INFO: Wrote new general configuration. Backup as /var/lib/pgsql/data/postgresql.2019-05-10-11-25-41.conf
INFO: Wrote new client auth configuration. Backup as /var/lib/pgsql/data/pg_hba.2019-05-10-11-25-41.conf
Database is online
System check finished
Shutting down spacewalk services...
Done.
Starting spacewalk services...
Done.
uyuni-server:~ # spacewalk-service status
● jabberd.service - Jabber Server
   Loaded: loaded (/usr/lib/systemd/system/jabberd.service; enabled; vendor preset: disabled)
   Active: active (exited) since Fri 2019-05-10 11:25:47 CEST; 2min 36s ago
  Process: 8852 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 8852 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/jabberd.service

But jabber doesn't work fine anyway:

2019/05/10 12:16:04 +02:00 8866 0.0.0.0: osad/jabber_lib.__init__
2019/05/10 12:16:04 +02:00 8866 0.0.0.0: osad/jabber_lib.setup_connection('Connected to jabber server', 'uyuni-server.local.inet')
2019/05/10 12:16:04 +02:00 8866 0.0.0.0: osad/jabber_lib.main('ERROR', 'Error caught:')
2019/05/10 12:16:04 +02:00 8866 0.0.0.0: osad/jabber_lib.main('ERROR', 'Traceback (most recent call last):\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 888, in auth\n    auth_response = self.waitForResponse(auth_iq_id, timeout=60)\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 1212, in waitForResponse\n    raise JabberQualifiedError(self.lastErrCode, self.lastErr)\nosad.jabber_lib.JabberQualifiedError: <JabberQualifiedError instance at 140161942566184; errcode=401; err=>\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 124, in main\n    c = self.setup_connection(no_fork=no_fork)\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 302, in setup_connection\n    resource=self._resource)\n  File "/usr/lib/python3.6/site-packages/osad/dispatcher_client.py", line 36, in start\n    self.auth(username, password, resource)\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 896, in auth\n    self.register(username, password)\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 1142, in register\n    self.sendRegInfo()\n  File "/usr/lib/python3.6/site-packages/jabber/jabber.py", line 669, in sendRegInfo\n    return self.SendAndWaitForResponse(reg_iq)\n  File "/usr/lib/python3.6/site-packages/jabber/jabber.py", line 419, in SendAndWaitForResponse\n    return self.waitForResponse(ID,timeout)\n  File "/usr/lib/python3.6/site-packages/osad/jabber_lib.py", line 1212, in waitForResponse\n    raise JabberQualifiedError(self.lastErrCode, self.lastErr)\nosad.jabber_lib.JabberQualifiedError: <JabberQualifiedError instance at 140161942568200; errcode=500; err=>\n')

Actions stuck, cannot delete systems

Hi, somehow I ended up with actions getting stuck, and I cannot delete systems.

Package lists for systems don't show in the Web UI

spacecmd {SSM:0}> system_list
system1.domain.co.uk : 1000010000
system2.domain.co.uk : 1000010003
system5.domain.co.uk : 1000010001
spacecmd {SSM:0}> system_delete -c 'FORCE_DELETE' system5.domain.co.uk
Profile                   System ID
------------------------  ---------
system5.domain.co.uk  1000010001

Delete these systems [y/N]: y
INFO: 1 system(s) scheduled for removal
spacecmd {SSM:0}> system_list
system1.domain.co.uk : 1000010000
system2.domain.co.uk : 1000010003
system5.domain.co.uk : 1000010001
spacecmd {SSM:0}> schedule_listpending
ID      Date                 C    F    P     Action
--      ----                ---  ---  ---    ------
32      20190404T10:16:04     0    0    1    Package List Refresh scheduled by (unknown)
31      20190404T10:15:55     0    0    1    Package List Refresh scheduled by (unknown)
30      20190404T10:15:51     0    0    1    Package List Refresh scheduled by (unknown)
29      20190404T09:33:16     0    0    1    Package List Refresh scheduled by (unknown)
27      20190404T09:26:56     0    0    1    Package List Refresh scheduled by (unknown)
26      20190404T09:24:25     0    0    1    Package List Refresh scheduled by (unknown)
24      20190404T09:23:30     0    0    1    Package List Refresh scheduled by (unknown)
21      20190403T17:56:40     0    0    1    Package List Refresh scheduled by (unknown)
20      20190403T17:56:33     0    0    1    Apply highstate scheduled by (unknown)
19      20190403T17:56:31     0    0    1    Apply states [certs, channels, channels.disablelocalrepos, packages, services.salt-minion] scheduled by (unknown)
18      20190403T17:56:24     0    0    1    Hardware List Refresh scheduled by (unknown)
12      20190403T16:57:45     0    0    1    Package List Refresh scheduled by (unknown)
11      20190403T16:57:43     0    0    1    Package List Refresh scheduled by (unknown)
spacecmd {SSM:0}> schedule_details 11
ID:        11
Action:    Package List Refresh scheduled by (unknown)
User:      None
Date:      20190403T16:57:43

Completed:   0
Failed:      0
Pending:     1

Pending Systems
---------------
system5.domain.co.uk
spacecmd {SSM:0}> schedule_cancel 11
ERROR: redstone.xmlrpc.XmlRpcFault: unhandled internal exception: Cannot cancel actions in PICKED UP state, aborting...

From rhn_taskomatic_daemon.log

INFO | jvm 1 | 2019/04/04 09:00:00 | 2019-04-04 09:00:00,524 [DefaultQuartzScheduler_Worker-5] ERROR com.suse.manager.webui.utils.MinionActionUtils - JsonParsingError("Minion did not return. [No response]", Expected BEGIN_ARRAY but was STRING at path $)

4GB of system memory are not enough to sync openSUSE Leap 15.0

After talking to a user (Barre) in the #uyuni IRC channel this weekend, we discovered that spacewalk-repo-sync is unable to sync Leap 15.0 with only 4GB of RAM available.

From dmesg:

[dom mar 17 04:14:18 2019] Out of memory: Kill process 10121 (spacewalk-repo-) score 257 or sacrifice child
[dom mar 17 04:14:18 2019] Killed process 10121 (spacewalk-repo-) total-vm:2173548kB, anon-rss:1448532kB, file-rss:496kB, shmem-rss:4kB

And although the times are not consistent:

INFO   | jvm 1    | 2019/03/17 01:51:34 | 2019-03-17 01:51:34,088 [Thread-42] WARN  com.redhat.rhn.taskomatic.core.SchedulerKernel - Number of interrupted runs: 1
STATUS | wrapper  | 2019/03/17 01:51:56 | TERM trapped.  Shutting down.
STATUS | wrapper  | 2019/03/17 01:51:57 | <-- Wrapper Stopped
STATUS | wrapper  | 2019/03/17 01:52:31 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2019/03/17 01:52:31 | Java Service Wrapper Community Edition 64-bit 3.5.32
STATUS | wrapper  | 2019/03/17 01:52:31 |   Copyright (C) 1999-2017 Tanuki Software, Ltd. All Rights Reserved.
STATUS | wrapper  | 2019/03/17 01:52:31 |     http://wrapper.tanukisoftware.com
STATUS | wrapper  | 2019/03/17 01:52:31 | 
STATUS | wrapper  | 2019/03/17 01:52:32 | Launching a JVM...
INFO   | jvm 1    | 2019/03/17 01:52:32 | WrapperManager: Initializing...
INFO   | jvm 1    | 2019/03/17 01:52:34 | Mar 17, 2019 1:52:33 AM com.mchange.v2.log.MLog$1 run
INFO   | jvm 1    | 2019/03/17 01:52:34 | INFO: MLog clients using java 1.4+ standard logging.
INFO   | jvm 1    | 2019/03/17 01:52:34 | Mar 17, 2019 1:52:34 AM com.mchange.v2.c3p0.C3P0Registry banner
INFO   | jvm 1    | 2019/03/17 01:52:34 | INFO: Initializing c3p0-0.9.5.2 [built 28-June-2018 11:17:26 +0000; debug? false; trace: 5]
INFO   | jvm 1    | 2019/03/17 01:52:39 | Mar 17, 2019 1:52:39 AM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource getPoolManager
INFO   | jvm 1    | 2019/03/17 01:52:39 | INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@9d04fe7b [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@f559513f [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acq
uireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> com.redhat.rhn.common.db.RhnConnectionCustomizer, connectionTesterClassName -> com.mchange.v2.c3p0.im
pl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> z8kflta1121ta0113a8ysh|2b1d43f0, i
dleConnectionTestPeriod -> 300, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 300, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource 
-> com.mchange.v2.c3p0.DriverManagerDataSource@e41d3f00 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> z8kflta1121ta0113a8ysh|f6bca03, jdbcUrl -> jdbc:postgresql://localhost:5432/uyuni, proper
ties -> {user=******, password=******, driver_proto=jdbc:postgresql} ], preferredTestQuery -> select 'c3p0 ping' from dual, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionO
nCheckout -> true, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> z8kflta1121ta0113a8ysh|573c5d3b, numHelperThreads -> 3 ]
INFO   | jvm 1    | 2019/03/17 01:52:41 | 2019-03-17 01:52:41,731 [Thread-42] WARN  org.hibernate.cfg.AnnotationBinder - HHH000457: Joined inheritance hierarchy [com.redhat.rhn.domain.image.ImageProfile] defined explicit @DiscriminatorColumn.  Legacy Hibernate behavior w
as to ignore the @DiscriminatorColumn.  However, as part of issue HHH-6911 we now apply the explicit @DiscriminatorColumn.  If you would prefer the legacy behavior, enable the `hibernate.discriminator.ignore_explicit_for_joined` setting (hibernate.discriminator.ignore_ex
plicit_for_joined=true)
INFO   | jvm 1    | 2019/03/17 01:52:49 | Mar 17, 2019 1:52:49 AM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource getPoolManager
INFO   | jvm 1    | 2019/03/17 01:52:49 | INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFa
ilure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> z8kflta1121ta0113a8ysh|13c3bb4, debugUnreturnedConnectionStac
kTraces -> false, description -> null, driverClass -> org.postgresql.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> z8kflta1121ta01
13a8ysh|13c3bb4, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:postgresql://localhost:5432/uyuni, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 10, maxStatements -> 0, max
StatementsPerConnection -> 120, minPoolSize -> 1, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin ->
 false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
INFO   | jvm 1    | 2019/03/17 01:52:51 | 2019-03-17 01:52:51,136 [Thread-42] WARN  com.redhat.rhn.taskomatic.core.SchedulerKernel - Number of interrupted runs: 2
INFO   | jvm 1    | 2019/03/17 01:58:00 | 2019-03-17 01:58:00,182 [DefaultQuartzScheduler_Worker-18] INFO  com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 5

Anyway, this doesn't seem to be the taskomatic issue we documented in the SUSE Manager release notes:

Channels with large number of packages
Some channels, like SUSE Linux Enterprise Server with Expanded Support or Redhat Enterprise Linux, come with an enormous number of packages.

If you have channels with a large number of packages added to SUSE Manager, taskomatic might run out of memory.

In this case it’s recommended to increase the maximum amount of memory allowed for taskomatic by editing /etc/rhn/rhn.conf and adding

taskomatic.java.maxmemory=4096
to this file.

A restart of taskomatic is needed after this change.

The number 4096 gives 4 GB of memory to taskomatic (up from the default 2GB) and should be raised even higher if taskomatic still runs out of memory.

Keep in mind this will affect the total memory required by SUSE Manager Server.

In this case the problem goes away by just increasing system memory to 8GB, without adjusting taskomatic config.

So I wonder if we shouldn't make a quick fix for 4.0.1 release notes (and maybe doc?) to increase the minimum to 8GB, until 4.0.2 is out (master branch already requires 8GB).

Salt formulas development - Pillar data

I have found some issues while developing the Salt formulas forms.

The thing is that all of the information used in the forms will be automatically set in the pillar files used by the formula. If a form input is not set, the pillar entry is created with an empty value. This is quite annoying when some of the inputs are optional.

I would like to have another feature, similar to visibleIf, to manage that.
It could be, for example, enableIf. The usage would be exactly the same as visibleIf, but in this case, if the statement is false, the entry (or the entry block below this parameter) would not be added to the pillar data.

Remember that one of the main usages of the pillar parameters is to check whether a parameter exists or not, in order to perform a different action. Most of the time this is used in Salt like:

if parameter is defined

So being able to remove the parameter entirely is important.
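To sketch the desired behaviour only (this is not how the Uyuni form handling actually works), dropping empty entries from the form data before it becomes pillar would have the same effect as an enableIf; the example keys are made up.

# Sketch: drop empty entries (and the blocks below them) from form data
# before it is written as pillar, so optional inputs simply disappear.
def prune_empty(data):
    if isinstance(data, dict):
        pruned = {k: prune_empty(v) for k, v in data.items()}
        return {k: v for k, v in pruned.items() if v not in ("", None, {}, [])}
    if isinstance(data, list):
        return [prune_empty(v) for v in data if v not in ("", None)]
    return data

# Hypothetical form data with an optional, unset input:
form_data = {"ha": {"sbd_device": "", "nodes": ["node1", ""]}}
print(prune_empty(form_data))   # {'ha': {'nodes': ['node1']}}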

Right now we have mitigated this problem by doing some pre-validation in our Salt formulas; here is an example:
https://github.com/SUSE/saphanabootstrap-formula/blob/master/hana/pre_validation.sls

I don't like this option, but at least it is working for now.

Move pipelines to uyuni and make a monolithic repo

I think we should move the https://github.com/SUSE/susemanager-ci pipelines inside the uyuni repo.

In the long term, this is an architecture decision that Uyuni needs to make.

Should we put all the Jenkins pipelines inside a special directory, uyuni/ci (one big monorepo), or should we have two different repos?

I am in favor of one big monolithic repo, like Google does, so we have everything there. I think that is the direction. We already discussed this in a retrospective in the past, and this was more or less the shared vision.

spacewalk-repo-sync within master broken for Debian repos when using a proxy

Changes introduced by 5e5e772 break the download of packages from Debian repositories, as the proxy URL gets two http:// prefixes.

So given a proxy within rhn.conf as

server.satellite.http_proxy = myproxy:8080

results in a proxy url set to

http://http://myproxy:8080

get_proxy already adds the http:// to the url in

url = 'http://%s' % CFG.http_proxy

In

'http' : 'http://'+self.proxy,

http:// is prepended again.
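A sketch of the kind of fix that would avoid the duplication, i.e. only prepend the scheme when it is not already there; this is an illustration of the idea, not the actual spacewalk-repo-sync patch.

# Only prepend "http://" when the configured proxy doesn't already carry a
# scheme, so "myproxy:8080" and "http://myproxy:8080" both normalize the same.
def normalize_proxy(proxy):
    if not proxy:
        return None
    if proxy.startswith(("http://", "https://")):
        return proxy
    return "http://" + proxy

print(normalize_proxy("myproxy:8080"))          # http://myproxy:8080
print(normalize_proxy("http://myproxy:8080"))   # http://myproxy:8080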

Leap 15

Is there any estimate on when Leap 15 will be supported? 42.3 is scheduled to go EOL at the end of the month.

I'm ok with installing on 42.3 initially as a test but I wouldn't want to deploy on 42.3 if I could avoid it.

Java traceback Caused by: java.io.IOException: Failed to send AJP message

Moving this from the mailing list to here. It appears on SUMA as this bug as well.

2019-04-18 14:22:01,829 [ajp-apr-0:0:0:0:0:0:0:1-8009-exec-10] ERROR com.redhat.rhn.frontend.servlets.SessionFilter - Error during transaction. Rolling back
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Failed to send AJP message
    at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:396)
    at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:315)
    at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:421)
    at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:409)
    at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:97)
    at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:90)
    at spark.serialization.DefaultSerializer.process(DefaultSerializer.java:38)
    at spark.serialization.Serializer.processElement(Serializer.java:49)
    at spark.serialization.Serializer.processElement(Serializer.java:52)
    at spark.serialization.Serializer.processElement(Serializer.java:52)
    at spark.serialization.SerializerChain.process(SerializerChain.java:53)
    at spark.http.matching.Body.serializeTo(Body.java:72)
    at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:189)
    at spark.servlet.SparkFilter.doFilter(SparkFilter.java:173)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at com.redhat.rhn.frontend.servlets.AuthFilter.doFilter(AuthFilter.java:98)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:142)
    at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:58)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at com.redhat.rhn.frontend.servlets.LocalizedEnvironmentFilter.doFilter(LocalizedEnvironmentFilter.java:71)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at com.redhat.rhn.frontend.servlets.EnvironmentFilter.doFilter(EnvironmentFilter.java:103)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at com.redhat.rhn.frontend.servlets.SessionFilter.doFilter(SessionFilter.java:55)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at com.redhat.rhn.frontend.servlets.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:97)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
    at org.apache.coyote.ajp.AbstractAjpProcessor.process(AbstractAjpProcessor.java:870)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
    at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to send AJP message
    at org.apache.coyote.ajp.AjpAprProcessor.output(AjpAprProcessor.java:125)
    at org.apache.coyote.ajp.AbstractAjpProcessor.writeResponseMessage(AbstractAjpProcessor.java:1698)
    at org.apache.coyote.ajp.AbstractAjpProcessor.writeData(AbstractAjpProcessor.java:1617)
    at org.apache.coyote.ajp.AbstractAjpProcessor.access$300(AbstractAjpProcessor.java:61)
    at org.apache.coyote.ajp.AbstractAjpProcessor$SocketOutputBuffer.doWrite(AbstractAjpProcessor.java:1765)
    at org.apache.coyote.Response.doWrite(Response.java:491)
    at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:391)
    ... 49 more

Looks related to the Products page or to calling mgr-sync.

Create / Refresh buttons consistency

On the Images List page, the buttons are ordered Import, Refresh, while on other pages, like the Profiles one, the order is Refresh, Create. It would be great to stay consistent about the placement of the Refresh button in those lists.

GC overhead limit exceeded

Uyuni 4.0.1 stable on openSUSE 42.3

Running in a VM with:
4 vCPUs
32GB RAM

When left running for a few days, the Tomcat service falls over. I see this error:

journalctl -u tomcat
Apr 13 20:47:39 uyuni server[1785]: Exception in thread "ajp-apr-0:0:0:0:0:0:0:1-8009-exec-3533" Exception in thread "ajp-apr-127.0.0.1-8009-
Apr 13 20:47:39 uyuni server[1785]: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ajp-apr-127.0.0
Apr 13 21:43:16 uyuni server[1785]: Exception in thread "com%002eredhat%002erhn%002edomain%002erhnpackage%002e%0050ackage%004bey%0054ype.data
Apr 13 21:43:16 uyuni server[1785]: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "I/O dispatcher 
Apr 14 00:34:43 uyuni server[1785]: Exception in thread "ajp-apr-127.0.0.1-8009-exec-9" Exception in thread "pool-2-thread-1" Exception in th
Apr 14 01:29:07 uyuni server[1785]: Exception in thread "WebSocket background processing" Exception in thread "ajp-apr-127.0.0.1-8009-exec-11
Apr 14 01:29:07 uyuni server[1785]: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ajp-apr-127.0.0
Apr 14 01:46:44 uyuni server[1785]: Exception in thread "ajp-apr-127.0.0.1-8009-exec-5" Exception in thread "C3P0PooledConnectionPoolManager[
Apr 14 01:46:44 uyuni server[1785]: Exception in thread "salt-event-connection-watchdog" java.lang.OutOfMemoryError: GC overhead limit exceed
Apr 14 01:46:44 uyuni server[1785]: java.lang.OutOfMemoryError: GC overhead limit exceeded

mgr-websockify.service fails to start

Using master packages (Leap 15.0 as base OS):

● mgr-websockify.service - TCP to WebSocket proxy
   Loaded: loaded (/usr/lib/systemd/system/mgr-websockify.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2019-05-10 11:25:51 CEST; 2min 32s ago
  Process: 8831 ExecStart=/usr/bin/websockify --token-plugin JWTTokenApi --token-source /etc/rhn/websockify.key --cert /etc/apache2/ssl.crt/server.crt --key /etc/pki/tls/private/spacewalk.key 8050 (code=exited, status=1/FAILURE)
  Process: 8811 ExecStartPre=/usr/bin/sh -c grep secret_key /etc/rhn/rhn.conf | tr -d ' ' | cut -f2 -d '=' | perl -ne 's/([0-9a-f]{2})/print chr hex $1/gie' > /etc/rhn/websockify.key (code=exited, status=0/SUCCESS)
 Main PID: 8831 (code=exited, status=1/FAILURE)

It is not a problem with the certificates and keys:

uyuni-server:~ # ls /etc/rhn/websockify.key
/etc/rhn/websockify.key
uyuni-server:~ # ls /etc/apache2/ssl.crt/server.crt
/etc/apache2/ssl.crt/server.crt
uyuni-server:~ # ls /etc/pki/tls/private/spacewalk.key
/etc/pki/tls/private/spacewalk.key

But it is a problem with the Python code:

uyuni-server:~ # /usr/bin/websockify --token-plugin JWTTokenApi --token-source /etc/rhn/websockify.key --cert /etc/apache2/ssl.crt/server.crt --key /etc/pki/tls/private/spacewalk.key 8050
Traceback (most recent call last):
  File "/usr/bin/websockify", line 11, in <module>
    load_entry_point('websockify==0.8.0', 'console_scripts', 'websockify')()
  File "/usr/lib/python3.6/site-packages/websockify/websocketproxy.py", line 511, in websockify_init
    token_plugin_cls = getattr(sys.modules[token_plugin_module], token_plugin_cls)
AttributeError: module 'websockify.token_plugins' has no attribute 'JWTTokenApi'

I can't reproduce it when the base is SLE 15 SP1 and we use the packages from Devel:Galaxy:Manager:Head:

   Loaded: loaded (/usr/lib/systemd/system/mgr-websockify.service; static; vendor preset: disabled)
   Active: active (running) since Thu 2019-05-09 23:58:45 CEST; 11h ago
 Main PID: 7770 (websockify)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/mgr-websockify.service
           └─7770 /usr/bin/python3 /usr/bin/websockify --token-plugin JWTTokenApi --token-source /etc/rhn/websockify.key --cert /etc/apache2/ssl.crt/server.crt --key /etc/pki/tls/private/spacewalk.key 8050

CentOS6/7 gets detected as RHEL Expanded Support 6/7

(Screenshots: screenshot_20181210_151258, screenshot_20181210_161639)

How to reproduce:

  1. Create the CentOS6/7 channels with spacewalk-common-channels
  2. Create bootstrap repositories for CentOS6/7.
  3. Create bootstrap scripts for CentOS6/7 (in my case it happened with traditional).
  4. Bootstrap CentOS6/7 clients.
  5. Look at the products installed.

New minions not added to systems

Uyuni 4.0.1

A new minion is bootstrapped using bootstrap.sh

It joins the Salt master, and I can find the key using salt-key -L.

But the minion never appears in Systems > Systems List.

Viewing the Salt message bus, I see:

salt/job/20190411153502494691/new       {
    "_stamp": "2019-04-11T14:35:02.495370", 
    "arg": [], 
    "fun": "mgractionchains.resume", 
    "jid": "20190411153502494691", 
    "minions": [
        "minion.domain.co.uk"
    ], 
    "missing": [], 
    "tgt": "minion.domain.co.uk", 
    "tgt_type": "glob", 
    "user": "salt"
}

salt/job/20190411153502494691/ret/minion.domain.co.uk        {
    "_stamp": "2019-04-11T14:35:06.850239", 
    "cmd": "_return", 
    "fun": "mgractionchains.resume", 
    "fun_args": [], 
    "id": "minion.domain.co.uk", 
    "jid": "20190411153502494691", 
    "metadata": {
        "suma-action-chain": true
    }, 
    "out": "nested", 
    "retcode": 254, 
    "return": "'mgractionchains.resume' is not available.", 
    "success": false
}

And on the minion, if I do a salt-call:

minion.domain.co.uk:~ # salt-call state.apply 
[WARNING ] /var/cache/salt/minion/extmods/grains/__pycache__/cpuinfo.cpython-36.py:20: DeprecationWarning: Use of 'salt.utils.which_bin' detected. This function has been moved to 'salt.utils.path.which_bin' as of Salt 2018.3.0. This warning will be removed in Salt Neon.

[ERROR   ] Error encountered during module reload. Modules were not reloaded.
[WARNING ] /var/cache/salt/minion/extmods/modules/udevdb.py:24: DeprecationWarning: Use of 'salt.utils.which_bin' detected. This function has been moved to 'salt.utils.path.which_bin' as of Salt 2018.3.0. This warning will be removed in Salt Neon.

[ERROR   ] Template was specified incorrectly: False
[ERROR   ] Template was specified incorrectly: False
local:
    Data failed to compile:
----------
    Specified SLS packages.packages_7a5b9695fb48e3a146f6b8d35caf5e2a in saltenv base is not available on the salt master or through a configured fileserver
----------
    Specified SLS custom.custom_7a5b9695fb48e3a146f6b8d35caf5e2a in saltenv base is not available on the salt master or through a configured fileserver
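
The "'mgractionchains.resume' is not available" return may indicate that the custom Uyuni Salt modules were never synced to this minion. A minimal sketch, to be run on the Uyuni server with the Salt Python API (the minion ID is the one from this report), of re-syncing the custom modules and seeing what arrives:

# Re-sync custom modules (such as mgractionchains) to the affected minion
# and print what was synced; run as root on the Salt master / Uyuni server.
import salt.client

local = salt.client.LocalClient()
print(local.cmd("minion.domain.co.uk", "saltutil.sync_modules"))
print(local.cmd("minion.domain.co.uk", "saltutil.sync_all"))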

Inconsistency between filesystem and Packages.gz filename for Debian / Ubuntu repos for Salt clients

I've been running Spacewalk since 2.1 and am currently using SW 2.7 with Debian 8/9 clients.

Now I'm trying to migrate the SW server to Uyuni and also get the power of Salt into our infrastructure.

I'm currently testing with a modified version of apt-transport-spacewalk (with a modified apt method) to get Debian clients to download packages via curl (with the token appended). This now works like a charm for the metadata (e.g. Packages.gz), but I was unable to download the packages from Uyuni, and now I have found the cause.

The names of the .deb files within the Packages.gz are built as name + '_' + version, but the files on the filesystem separate name and version with a normal hyphen '-', so I always get a 404 HTTP error.

Example: Within the web interface, I have a db-util-5.3.1-X.all-deb package within a channel. The path within the web interface is shown as packages/1/c7c/db-util/5.3.1-X/all-deb/c7c71323bd77968b259a22e98218910c99da739e41b972af48beb3818e02d1cd/db-util-5.3.1-X.all-deb.deb.

But the filename within the Packages.gz is shown as

Filename: XMLRPC/GET-REQ/stretch_main_main/getPackage/db-util_5.3.1-X.all-deb.deb

If I replace the '_' between name and version with '-', the download works as expected. A traditional client does not seem to have a problem with this; only the Salt clients (with that direct curl request) are affected.

Because '_' is the default separator between name and version, it should be kept within the Packages.gz; instead, the filename on the filesystem (and possibly the name of the link within the web interface) should be modified to use the '_' character.

Also, when I try to download packages with the XMLRPC/GET-REQ path, I get a 405 HTTP error. So I additionally rewrite the target request and replace XMLRPC/GET-REQ/ with rhn/manager/download/.

So the final requests look like https://<server>/rhn/manager/download/<channel>/getPackage/db-util-....?<token>
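
A minimal sketch of the rewrite described above (the path layout and token handling are taken from this report, not from an official Uyuni API):

# Turn a Filename entry from Packages.gz into a URL that Uyuni actually serves:
# swap the XMLRPC/GET-REQ prefix for rhn/manager/download and change the '_'
# between package name and version into '-'.
def rewrite_deb_url(packages_gz_filename, server, token):
    # e.g. "XMLRPC/GET-REQ/stretch_main_main/getPackage/db-util_5.3.1-X.all-deb.deb"
    path = packages_gz_filename.replace("XMLRPC/GET-REQ/", "rhn/manager/download/", 1)
    prefix, deb = path.rsplit("/", 1)
    deb = deb.replace("_", "-", 1)  # Debian package names cannot contain '_'
    return "https://%s/%s/%s?%s" % (server, prefix, deb, token)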

remove global vars in ruby codebase

Description

We have some global variables in the Ruby codebase.

The goal of this issue is to find them, check whether they are really needed, and replace them with local ones.

Support additional headers for deb packages

According to @rpasche's mail to uyuni-devel:

Several metadata headers are missing for Debian packages, which causes major issues on Debian / Ubuntu systems.

One of them is the "Multi-Arch" header. Because it is missing, packages can be installed over and over again: apt does not correctly recognize that the package is already the newest installed version and keeps suggesting these packages when searching for updates.

My initial impression is that we have to extend spacewalk-repo-sync to take additional headers into account when syncing each package.
Some changes to the database might also be needed in order to store these additional package fields.
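
A minimal, illustrative sketch of picking such extra headers out of a repository's Packages index (this is not the actual spacewalk-repo-sync code; the header list is just an example):

# Collect selected extra headers (e.g. Multi-Arch) per package from a Debian
# "Packages" file. Stanzas are separated by blank lines; continuation lines
# start with a space and are skipped here.
def extra_deb_headers(packages_path, wanted=("Multi-Arch",)):
    headers, current = {}, {}
    with open(packages_path) as index:
        for raw in index:
            line = raw.rstrip("\n")
            if not line:
                if "Package" in current:
                    headers[current["Package"]] = {
                        key: value for key, value in current.items() if key in wanted
                    }
                current = {}
            elif not line.startswith(" ") and ":" in line:
                key, _, value = line.partition(":")
                current[key.strip()] = value.strip()
    if "Package" in current:  # final stanza without a trailing blank line
        headers[current["Package"]] = {k: v for k, v in current.items() if k in wanted}
    return headers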

Require parameters in SUMA salt formulas frontend

I would like to request a small new feature or improvement in the SUMA salt formulas usage.

Right now, when we are in a Salt formula customization window setting parameter values, we can leave an entry empty even though the parameter is not optional.

In my opinion, if a parameter is not optional and we don't set any value, the form should complain when we try to save it (saying the value is required and highlighting the borders of the input box in red, for example).

Otherwise the non-optional parameters end up saved with empty values, which is not the ideal behaviour in my opinion.
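
For illustration, the kind of check that would be needed, sketched in Python rather than in the actual React frontend; the '$optional' flag and the flat form layout are assumptions for this example, not the real formulas-with-forms metadata model:

# Report non-optional form fields that were left empty, so the form can refuse
# to save and highlight them. Illustrative sketch only.
def missing_required(form_definition, values):
    missing = []
    for name, spec in form_definition.items():
        if name.startswith("$") or not isinstance(spec, dict):
            continue  # skip meta keys
        if not spec.get("$optional", False) and values.get(name) in (None, ""):
            missing.append(name)
    return missing

print(missing_required({"admin_user": {"$type": "text"}}, {"admin_user": ""}))
# ['admin_user'] -> the save should be rejected and the field highlighted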

What do you think?

spacewalk-schema-upgrade broken in master (Leap15)

Although it might not be the supported way, I am trying to upgrade my uyuni testserver from Leap42.3 to Leap15.0, so I can further try to test the new Ubuntu / Debian support.

Anyway, spacewalk-schema-upgrade seems to be broken, and it's because of the package name (specifically the release part) of the susemanager-schema package.

The lp within the release breaks the regex ^(.+-\d+(\.\d+)*)(\..*)*$ that is used to find the target_schema. On my test system, this looks like this:

uyuni-test:/usr/bin # perl /usr/bin/spacewalk-schema-upgrade
Schema upgrade: [susemanager-schema-4.0.3-1.1] -> [susemanager-schema-4.0.10-lp150.1.3]
Searching for upgrade path to: [susemanager-schema-4.0]
Searching for upgrade path to: [susemanager-schema]
Was not able to find upgrade path in directory [/etc/sysconfig/rhn/schema-upgrade].
uyuni-test:/usr/bin #

It is not looking for an upgrade path to susemanager-schema-4.0.10 as it should.
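
The behaviour is easy to reproduce outside of spacewalk-schema-upgrade; a minimal sketch with the same regex in Python (the original script is Perl, but the backtracking is identical here):

import re

name = "susemanager-schema-4.0.10-lp150.1.3"
match = re.match(r"^(.+-\d+(\.\d+)*)(\..*)*$", name)
# Prints "susemanager-schema-4.0": the trailing (\..*)* swallows ".10-lp150.1.3",
# so the captured target schema loses the patch level.
print(match.group(1))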

Salt formulas execution order in SUMA

Hi all,

After working a bit with SUMA and some Salt formulas, I have noticed some potential improvements that might make the usage and even the development of Salt formulas easier. Here is one of them:

Give the option to order the execution of the formulas. I know that the formulas should be independent of each other, but in some cases this is not true. Right now the execution is done in alphabetical order (in our case, cluster first and then hana). I would like to have the option to decide the execution order. I guess that this information is used to adapt some top.sls file, so it would be great to have this feature. Here is an example:

[Screenshot: the formula list, with cluster ordered before hana]

As described above, in this case cluster will always be executed first and hana* afterwards.
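
For illustration, deciding the order could be as simple as keeping an explicit priority per formula instead of relying on alphabetical sorting (the names and the priority mapping below are made up for this example):

# Sort formulas by a user-defined priority, falling back to alphabetical
# order for formulas without one. Illustrative sketch only.
def order_formulas(formulas, priority):
    return sorted(formulas, key=lambda name: (priority.get(name, len(priority)), name))

print(order_formulas(["cluster", "hana"], {"hana": 0, "cluster": 1}))
# ['hana', 'cluster'] instead of the current alphabetical ['cluster', 'hana']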

UnicodeEncodeError when importing SLES11SP4 repo

I have tried to import the regular SLES11 SP4 repo into a repository, and I got this backtrace:

2019/02/22 15:39:55 +02:00   Importing packages to DB:
2019/02/22 15:41:25 +02:00 20.6 %
2019/02/22 15:42:55 +02:00 32.37 %
2019/02/22 15:44:25 +02:00 33.64 %
2019/02/22 15:45:55 +02:00 72.43 %
2019/02/22 15:47:26 +02:00 97.84 %
2019/02/22 15:47:33 +02:00 Importing packages finished.
2019/02/22 15:47:33 +02:00 
2019/02/22 15:47:33 +02:00   Linking packages to the channel.
2019/02/22 15:47:38 +02:00 
2019/02/22 15:47:38 +02:00   Patches in repo: 0.
2019/02/22 15:47:38 +02:00 Unexpected error: <type 'exceptions.UnicodeEncodeError'>
2019/02/22 15:47:38 +02:00 Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/spacewalk/satellite_tools/reposync.py", line 581, in sync
    self.import_products(plugin)
  File "/usr/lib/python2.7/site-packages/spacewalk/satellite_tools/reposync.py", line 1875, in import_products
    products = repo.get_products()
  File "/usr/lib/python2.7/site-packages/spacewalk/satellite_tools/repo_plugins/yum_src.py", line 572, in get_products
    p['description'] = _fix_encoding(product.find('description').text)
  File "/usr/lib/python2.7/site-packages/spacewalk/satellite_tools/repo_plugins/yum_src.py", line 933, in _fix_encoding
    return str(text)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 204: ordinal not in range(128)
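
This is the classic Python 2 pitfall: calling str() on a unicode object that contains non-ASCII characters (here a typographic apostrophe) falls back to the ASCII codec. A minimal sketch of the kind of guard _fix_encoding needs, not necessarily the actual upstream fix:

# Python 2 sketch: encode unicode text explicitly instead of relying on the
# implicit ASCII conversion that str() performs.
def _fix_encoding(text):
    if isinstance(text, unicode):  # 'unicode' exists on Python 2 only
        return text.encode("utf-8")
    return str(text)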

Versions:

# rpm -qa | grep uyu
uyuni-build-keys-web-12.0.1-4.1.noarch
uyuni-build-keys-12.0.1-4.1.noarch
patterns-uyuni_server-4.0-4.1.x86_64
release-notes-uyuni-4.0.1-2.1.x86_64

YaST2 setup product detection not working in the log

The greeter got fixed at https://github.com/uyuni-project/uyuni/blob/master/susemanager/yast/susemanager_congratulate.rb#L19

However, in the log the following is present:

Visit https://uyuni-server.suse.inet to create the Spacewalk administrator account.

I think it's because of https://github.com/uyuni-project/uyuni/blob/master/spacewalk/setup/bin/spacewalk-setup#L57

Should we maybe change it to work in the same way susemanager_congratulate.rb does?

@mcalmer what's your opinion?

Get Debian clients registered as Salt clients into Uyuni

This is some kind of general issue to collect the current problems getting Debian systems integrated as Salt clients into Uyuni.

I will start by listing the first points I have recognized so far and will add further information later as I am able to collect it. Nearly all modifications (for my tests so far) have been done on the Debian test client (apart from modifications of some Salt states within /usr/share/susemanager/salt/).

  1. New apt method needed to download deb packages via HTTP request (and token). See also #252
  2. Jinja template rendering error within bootstrap.remove_traditional_stack.sls. This seems to be caused by repos.items() where the data is a list of dicts instead of a plain dict as on SUSE or RedHat (see the sketch at the end of this issue).
  3. pkg.info_installed on Debian does not support setting the attr to be returned by the function, so you get an exception saying these arguments are not supported.
  4. Also, the install_date is missing for many Debian packages (and therefore install_date_time_t as well, even after modifying the client's Salt modules). See saltstack/salt#50318
  5. After registering a Debian system (after slight modifications of the base Salt states within Uyuni), packages.profileupdate does not seem to trigger a real package profile update within the database. Within the Salt master logfile (running in debug mode), several errors parsing the IPv4 address are reported.

As said, I will add further information.
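
A minimal sketch of normalizing the repos structure from point 2 so the same state logic works on both layouts (the shapes are the ones reported above, not verified against the actual pillar data):

# Accept either a plain {name: data} dict (SUSE/RedHat) or a list of
# single-entry dicts (as reported for Debian) and return (name, data) pairs.
def iter_repos(repos):
    if isinstance(repos, dict):
        return list(repos.items())
    pairs = []
    for entry in repos:
        pairs.extend(entry.items())
    return pairs

print(iter_repos({"main": {"url": "http://example.invalid"}}))
print(iter_repos([{"main": {"url": "http://example.invalid"}}]))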

Add VNC display page for all clients

Virtual Machines will naturally get a VNC display page -- see PR #417 for this.
This feature should be extended to all other clients. This requires installing a VNC server on the physical machines.

Unbelievable fact: the country Serbia exists

After maybe just 13 years... in most setups we do not have this country called Serbia, but hey, we do have a country called Yugoslavia!

 Error output
 ┌────────────────────────────────────────────────────────────────────────────────
 │The answer 'RS' provided for 'Country code (Examples: "US", "JP", "IN", or type "?" to see a list)' is invalid.   
 │ERROR: spacewalk-setup failed

Whoever manages to find Yugoslavia on a (normal, modern) map gets a free beer from me...
https://en.wikipedia.org/wiki/Serbia <--- This DOES exist
https://en.wikipedia.org/wiki/Yugoslavia <--- This DOES NOT exist

Please, it is really not fun that after 10+ years I still can't find my country in systems...
Yes, I know it is not only Uyuni's fault, but it is not funny to have to live in Yugoslavia in order to use a software package from 2019...

Fun fact: it has been 20 years since the bombing of Yugoslavia, which it seems you like to be reminded of...
