
Stork by Fizzed

Maven Central

Java 8 Java 11 Java 17

Linux x64 MacOS x64 Windows x64

Overview

So you've engineered that amazing Java-based application. Then what? Distributing it or getting it into production is your new problem. Fat/uber jar? Vagrant? Docker? Rkt? LXD? Traditional bare metal deploy? There are so many options!

Stork is a collection of lightweight utilities for optimizing your "after-build" workflow by filling in the gap between your Java build system and execution. Using well-tested methods across operating systems, containers, and more, Stork lets you safely and securely run your app in any environment -- be it Docker, Rkt, LXD, or a traditional system. There are three main Stork components that you can pick and choose from to help with your app:

  • stork-launcher will generate well-tested, rock-solid, secure launcher scripts from a YAML configuration file for either console or daemon/service JVM apps. The generated launchers run your app the same way regardless of whether it is running inside a container or on any of numerous operating systems.

  • stork-assembly will assemble your JVM app into a standard, well-defined canonical layout as a tarball ready for universal distribution or deployment. Regardless of whether your users are on Linux, Windows, OSX, *BSD, etc., the tarball will include everything they need.

  • stork-deploy will rapidly and securely deploy your assembly via SSH into a versioned directory structure on various operating systems. It handles restarting daemons, uses strict user/group permissions, and verifies that the deploy worked. Power users can combine it with Blaze for even more advanced deploys.

Using Stork to deploy a production Ninja Framework app

Sponsorship & Support

Project by Fizzed, Inc. (Follow on Twitter: @fizzed_inc)

Developing and maintaining open source projects requires significant time. If you find this project useful or need commercial support, we'd love to chat. Drop us an email at [email protected]

Project sponsors may include the following benefits:

  • Priority support (outside of Github)
  • Feature development & roadmap
  • Priority bug fixes
  • Privately hosted continuous integration tests for their unique edge or use cases

Example

stork-demo-hellod is an example Maven project for a simple Java web application. It demos the stork-launcher and stork-assembly utilities and produces a tarball assembly that can be deployed using stork-deploy. To generate the launchers and assembly, run this from the stork main directory:

mvn package -DskipTests=true

This will generate all launchers, prep the assembly in target/stork, and tarball it up to target/stork-demo-hellod-X.X.X.tar.gz (X.X.X is the version of your project). You can quickly try it out:

cd stork-demo/stork-demo-hellod
target/stork/bin/stork-demo-hellod --run

Or you can deploy it via SSH using stork-deploy:

stork-deploy --assembly target/stork-demo-hellod-X.X.X.tar.gz ssh://host.example.com

Or you can build a Docker image:

docker build -t stork-demo-hellod .
docker run -it stork-demo-hellod

Usage

Command-line

https://github.com/fizzed/stork/releases/download/v3.0.0/stork-3.0.0.tar.gz

Maven plugin

<build>
    <plugins>
        <plugin>
            <groupId>com.fizzed</groupId>
            <artifactId>stork-maven-plugin</artifactId>
            <version>3.0.0</version>
            <!-- configuration / execution (see below) -->
        </plugin>
    </plugins>
</build>

Gradle Plugin (to be released soon)

plugins {
  id "com.fizzed.stork" version "x.x.x"
}
// configuration / execution (see below)

Why not just create my own script?

That's what we used to do with all of our Java apps too. Eventually, you'll have a problem -- we guarantee it. For example, you ran java -jar app.jar & in a shell and everything was working. Then you closed your terminal/SSH session and your app was no longer running. Oops -- you forgot to detach your app from the terminal. Using systemd? Did you remember to add the -Xrs flag when you launched your java process? A customer needs to run your app on Windows? In that case you have no option but to use some sort of service framework. Or something as simple as java not being found by your init system, even though it works in your shell. Stork launchers solve these common problems.

Why not just a fat/uber jar?

An uber/fat jar is a jar that has all dependencies merged into it. Usually an application consists of more than just your jar (such as config files), so you'll still need to package that up. Then how do you run it as a daemon/service? Plus, it's becoming more important to cache/retain the dependencies that didn't change for faster deploys using Docker or rsync.

Stork launcher

Utility for generating native launchers for Java-based applications across Windows, Linux, Mac, and many other UNIX-like systems.

You simply create a YAML-based config file (that you can check-in to source control) and then you compile/generate it into one or more launchers. These launchers can then be distributed with your final tarball/assembly/package so that your app looks like a native compiled executable.

Features

  • Generate secure launcher scripts for either console or daemon/service JVM apps.
  • Heavily tested across all major operating systems for every release
    • Windows XP+
    • Linux (Ubuntu, Debian, Redhat flavors)
    • Mac OSX
    • FreeBSD
    • OpenBSD
  • Intelligent & automatic JVM detection (e.g. no need to have JAVA_HOME set)
  • Carefully researched, tested, and optimized methods for running daemons/services
    • Windows daemons installed as a service (32 and/or 64-bit daemons supported)
    • Linux/UNIX daemons can either use exec or use NOHUP, detach TTY properly, and do NOT spawn any sort of annoying helper/controller process
    • Excellent SystemD and SysV support
    • Mac OSX daemons integrate seamlessly with launchctl
    • All daemons can easily be run in non-daemon mode (to make debugging simpler)
    • All companion helper scripts are included to get the daemon to start at boot
  • Configurable methods for verifying that a daemon started -- including useful debug output (e.g. if the daemon fails to start, tail the log so the error is printed).
  • Supports fixed or percentage-based min/max memory at JVM startup
  • Supports launching apps that either retain the working directory of the shell or set the working directory to the home of the app.
  • Sets the working directory of the app without annoyingly changing the working directory of the shell that launched it (even on Windows).
  • Command-line arguments and/or system properties are seamlessly passed thru to the underlying Java app
  • Runtime debugging: set the LAUNCHER_DEBUG=1 env var before executing the binary to see what's going on (e.g. how was the JVM found?)
  • Support for symlinking the detected JVM as the application name so that Linux/UNIX commands such as top/ps make identifying the application easier.
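
To make the percentage-based memory feature above concrete, here is a minimal sketch of how a launcher could turn a max-memory percentage into a JVM flag. The class and method names are hypothetical, not Stork's actual implementation:

```java
// Hypothetical sketch: derive a -Xmx flag from a percentage of total system memory.
public class MemoryPct {

    // e.g. 20% of an 8192 MB system -> "-Xmx1638m"
    static String maxHeapFlag(long totalSystemMb, int maxPct) {
        long heapMb = (totalSystemMb * maxPct) / 100;
        return "-Xmx" + heapMb + "m";
    }

    public static void main(String[] args) {
        System.out.println(maxHeapFlag(8192, 20)); // prints -Xmx1638m
    }
}
```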

Usage

Compiles all launchers in src/main/launchers to target/stork (which will result in target/stork/bin and target/stork/share dirs).

Command-line

stork-launcher -o target/stork src/main/launchers

Maven

<build>
    <plugins>
        <plugin>
            <groupId>com.fizzed</groupId>
            <artifactId>stork-maven-plugin</artifactId>
            <version>3.0.0</version>
            <executions>
                <execution>
                    <id>stork-launcher</id>
                    <goals>
                        <goal>launcher</goal>
                    </goals>
                </execution>
            </executions> 
        </plugin>
        ...
    </plugins>
</build>

Gradle (to be released soon)

  • task name: storkLauncher
storkLaunchers {
    outputDirectory = new File("${project.buildDir}", "stork")
    inputFiles = ["${project.projectDir}/src/main/launchers".toString()]
    launcher {
        name = "test1"
        displayName = "test1"
        domain = "com.fizzed.stork.test1"
        shortDescription = "desc"
        type = "DAEMON"
        platforms = ["LINUX","MAC_OSX"]
        workingDirMode = "APP_HOME"
        mainClass = "class"
    }
    launcher {
        name = "test2"
        displayName = "test2"
        domain = "com.fizzed.stork.test1"
        shortDescription = "desc"
        type = "DAEMON"
        platforms = ["LINUX","MAC_OSX"]
        workingDirMode = "APP_HOME"
        mainClass = "class"
    }
}

To customize, the following properties are supported:

  • outputDirectory: The directory the launcher will compile/generate launchers to. Defaults to ${project.build.directory}/stork

  • inputFiles: An array of input directories or files to compile in a single invocation. Defaults to ${basedir}/src/main/launchers

Configuration file

# Name of application (make sure it has no spaces)
name: "hello-console"

# Domain of application (e.g. your organization such as com.example)
domain: "com.fizzed.stork.sample"

# Display name of application (can have spaces)
display_name: "Hello Console App"

short_description: "Demo console app"

long_description: "Demo of console app for mfizz jtools launcher"

# Type of launcher (CONSOLE or DAEMON)
type: CONSOLE

# Java class to run
main_class: "com.fizzed.stork.sample.HelloConsole"

# Platform launchers to generate (WINDOWS, LINUX, MAC_OSX)
# Linux launcher is suitable for Bourne shells (e.g. Linux/BSD)
platforms: [ WINDOWS, LINUX, MAC_OSX ]

# Working directory for app
#  RETAIN will not change the working directory
#  APP_HOME will change the working directory to the home of the app
#    (where it was installed) before running the main class
working_dir_mode: RETAIN

# Arguments for application (as though user typed them on command-line)
# These will be added immediately after the main class part of java command
# Users can either entirely override it at runtime with the environment variable
# APP_ARGS or append extra arguments with the EXTRA_APP_ARGS environment variable
# or by passing them in on the command-line too.
#app_args: "-c config.yml"

# Arguments to use with the java command (e.g. way to pass -D arguments)
# Users can either entirely override it at runtime with the environment variable
# JAVA_ARGS or append extra arguments with the EXTRA_JAVA_ARGS environment variable
# or by passing them in on the command-line too.
#java_args: "-Dtest=foo"

# Minimum version of java required (system will be searched for acceptable jvm)
# Defaults to Java 1.6.
#min_java_version: "1.6"

# Maximum version of java required (system will be searched for acceptable jvm)
# Defaults to empty (all)
#max_java_version: ""

# Min/max fixed memory (measured in MB). Defaults to empty values which allows
# Java to use its own defaults.
#min_java_memory: 30
#max_java_memory: 256

# Min/max memory by percentage of system
#min_java_memory_pct: 10
#max_java_memory_pct: 20

# Try to create a symbolic link to java executable in <app_home>/run with
# the name of "<app_name>-java" so that commands like "ps" will make it
# easier to find your app. Defaults to false.
#symlink_java: true
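
The min_java_version/max_java_version checks boil down to a numeric comparison of dotted version strings. Here is a minimal sketch of such a comparison (hypothetical names; not Stork's actual JVM detection code):

```java
// Hypothetical sketch of a min_java_version check: compare dotted version strings numerically.
public class JavaVersionCheck {

    static boolean meetsMinimum(String found, String min) {
        String[] a = found.split("\\.");
        String[] b = min.split("\\.");
        int n = Math.max(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int x = i < a.length ? Integer.parseInt(a[i]) : 0; // missing parts count as 0
            int y = i < b.length ? Integer.parseInt(b[i]) : 0;
            if (x != y) {
                return x > y;
            }
        }
        return true; // versions are equal
    }

    public static void main(String[] args) {
        System.out.println(meetsMinimum("1.8", "1.6")); // true
        System.out.println(meetsMinimum("1.6", "1.7")); // false
    }
}
```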

Overriding launcher environment variables

All launcher scripts are written to allow last-minute or per-environment overrides (as of v2.7.0). Let's say you needed to add a few more Java system properties and wanted to execute a daemon launcher named "hellod":

EXTRA_JAVA_ARGS="-Da=1 -Db=2" /opt/hellod/current/bin/hellod --run

If you run hellod as a daemon using SYSV or SystemD init scripts then stork will load environment variables from /etc/default/hellod. You could place this value in there as well as others you need. In /etc/default/hellod:

APP_HOME=/opt/hellod/current
EXTRA_JAVA_ARGS="-Da=1 -Db=2"
DB_PASSWORD=mypass

Stork's launcher scripts for daemons will load these environment variables when starting. Variables used by the launcher script (e.g. APP_HOME or EXTRA_JAVA_ARGS) act as overrides. Variables not used by it (e.g. DB_PASSWORD) are effectively passed through to the Java process.
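
Because unused variables such as DB_PASSWORD reach the Java process unchanged, the application can read them with the standard System.getenv call. A minimal sketch (the variable name follows the example above; the fallback is only there so the sketch runs outside a deployed environment):

```java
// Sketch: reading a passed-through environment variable inside the launched app.
public class ReadEnv {

    static String dbPassword() {
        String value = System.getenv("DB_PASSWORD");
        // Fall back to a placeholder when the variable is not set.
        return value != null ? value : "(not set)";
    }

    public static void main(String[] args) {
        System.out.println("DB_PASSWORD=" + dbPassword());
    }
}
```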

Stork assembly

Stages and assembles your application into a canonical stork layout. The following are copied to target/stork/lib using the full groupId-artifactId-version naming format:

  • Your project artifact (if it's a jar)
  • Any additional "attached" runtime jar artifacts
  • Your runtime dependencies

Your project's basedir conf/, bin/, and share/ directories are then copied to target/stork (overlaying/overwriting any files currently in target/stork). To include launchers as part of your assembly, you will need to include both the assembly goal and one or more launcher goals. Finally, the contents of target/stork are tarballed into ${finalName}.tar.gz with an install prefix of ${finalName} as the root directory of the tarball (so it unpacks correctly).
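
The groupId-artifactId-version naming used in target/stork/lib can be illustrated with a small sketch (a hypothetical helper, not part of Stork's API):

```java
// Sketch: the fully-qualified file name an artifact gets in target/stork/lib.
public class LibName {

    static String libFileName(String groupId, String artifactId, String version) {
        return groupId + "-" + artifactId + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        System.out.println(libFileName("com.fizzed", "stork-launcher", "3.0.0"));
        // prints com.fizzed-stork-launcher-3.0.0.jar
    }
}
```

The fully-qualified name avoids collisions between artifacts that share an artifactId across different groups.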

Usage

Maven

<build>
    <plugins>
        <plugin>
            <groupId>com.fizzed</groupId>
            <artifactId>stork-maven-plugin</artifactId>
            <version>3.0.0</version>
            <executions>
                <execution>
                    <id>stork-assembly</id>
                    <goals>
                        <goal>assembly</goal>
                    </goals>
                </execution>
            </executions> 
        </plugin>
    </plugins>
</build>

Gradle (to be released soon)

  • task name: storkAssembly
storkAssembly {
    stageDirectory = new File("${project.buildDir}", "stork")
    outputFile = project.buildDir
}

What's nice is that target/stork still exists and you are free to directly run anything in target/stork/bin -- since the launcher scripts correctly pick up your relative dependencies. You can quickly run your application as though you had already deployed it to a remote system.

To customize, the following properties are supported:

  • stageDirectory: The directory where assembly contents will be staged to and tarballed from. Defaults to ${project.build.directory}/stork

  • outputDirectory: The directory the final tarball assembly will be output to. Defaults to ${project.build.directory}

  • finalName: The final name of the assembly tarball -- as well as the name of the root directory contained within the tarball -- that will contain the contents of stageDirectory. Defaults to ${project.build.finalName}

  • attachArtifacts: If true, the .tar.gz archive will be attached as an artifact to the Maven build, installed to the local repository, and deployed to the remote repository in the deploy phase. Defaults to false

  • classifier: Classifier used for the attached .tar.gz archive. Only relevant when attachArtifacts is set to true. Defaults to no classifier.

Stork deploy

Utility for rapidly deploying a "versioned" install on one or more remote Linux-based systems via SSH. Installs a stork-based assembly tarball into a versioned directory structure on a remote system and handles restarting daemons as needed. The versioned directory structure allows rapid deployment with the ability to revert to a previous version if needed. Power users can combine with Blaze for even more advanced deploys.

Usage

Command-line to traditional remote system

stork-deploy --assembly target/myapp-1.0.0-SNAPSHOT.tar.gz ssh://host.example.com

Command-line to Vagrant

stork-deploy --assembly target/myapp-1.0.0-SNAPSHOT.tar.gz vagrant+ssh://machine-name

Overview

Since this is a "SNAPSHOT" version, a timestamp is generated (such as 20160401-121032 for April 1, 2016 12:10:32) and the application is installed to:

/opt/myapp/v1.0.0-20160401-121032

A symlink will be created:

/opt/myapp/current -> /opt/myapp/v1.0.0-20160401-121032

Since this application contains one daemon called "hello-server", the daemon is stopped (if it exists), the upgrade occurs, then the daemon is installed (if needed) and started back up. The directories marked in the canonical layout as retained on upgrade are moved rather than overwritten. That means during a fresh install, the bin/, lib/, conf/, and share/ directories are installed. On an upgrade install, the bin/, lib/, and share/ directories are installed, while the conf/ directory and the runtime data/, log/, and run/ directories are moved.
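
The versioned path logic above can be sketched as follows. The helper is hypothetical (not Stork's actual code); the timestamp format follows the 20160401-121032 example:

```java
// Sketch of the versioned install directory stork-deploy creates under /opt.
public class VersionedPath {

    // For SNAPSHOT versions, a build timestamp replaces the "-SNAPSHOT" suffix.
    static String installDir(String app, String version, String timestamp) {
        String v = version.endsWith("-SNAPSHOT")
                ? version.substring(0, version.length() - "-SNAPSHOT".length()) + "-" + timestamp
                : version;
        return "/opt/" + app + "/v" + v;
    }

    public static void main(String[] args) {
        System.out.println(installDir("myapp", "1.0.0-SNAPSHOT", "20160401-121032"));
        // prints /opt/myapp/v1.0.0-20160401-121032
    }
}
```

A deploy then only has to repoint the /opt/myapp/current symlink, which is what makes reverting to a previous version cheap.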

Programmatic deploys using Blaze

You can combine Stork with Blaze to make automating your deployments even simpler. You'll also never need to download stork locally since Blaze will automatically fetch the dependency for you.

Download blaze:

curl -o blaze.jar 'http://repo1.maven.org/maven2/com/fizzed/blaze-lite/0.16.0/blaze-lite-0.16.0.jar'

Create a blaze.conf file:

blaze.dependencies = [
  "com.fizzed:stork-deploy:3.0.0"
]

Create a blaze.java file:

import java.nio.file.Path;
import java.nio.file.Paths;
import com.fizzed.blaze.Task;
import com.fizzed.stork.deploy.Assembly;
import com.fizzed.stork.deploy.Assemblys;
import com.fizzed.stork.deploy.DeployOptions;
import com.fizzed.stork.deploy.Deployer;

public class blaze {

    private final Path archiveFile = Paths.get("target/hello-0.0.1-SNAPSHOT.tar.gz");

    @Task("Deploy assembly to staging env")
    public void deploy_stg() throws Exception {
        DeployOptions options = new DeployOptions()
            .user("hello")
            .group("hello");

        try (Assembly assembly = Assemblys.process(archiveFile)) {
            new Deployer().deploy(assembly, options, "ssh://app1");
            new Deployer().deploy(assembly, options, "ssh://app2");
        }
    }
}

Run it

java -jar blaze.jar deploy_stg

More examples

stork-demo-hellod

stork-demo/stork-demo-hellod

To try this demo out, we use Blaze to script the build and execution process:

java -jar blaze.jar demo_hellod

stork-demo-dropwizard

stork-demo/stork-demo-dropwizard

To try this demo out, we use Blaze to script the build and execution process:

java -jar blaze.jar demo_dropwizard

By default the server runs on port 8080, and you can then visit the sample in your browser at http://localhost:8080/

Development & contributing

Please see the development guide for info on building, testing, and eventually contributing to this project.

License

Copyright (C) 2020 Fizzed, Inc.

This work is licensed under the Apache License, Version 2.0. See LICENSE for details.

stork's People

Contributors

ajcamilo, bertranda, dajester2013, dependabot[bot], gitblit, jjlauer, lodrantl, mikegager


stork's Issues

Can we implement stork plugin with spring boot application

I am trying to use Stork in my Spring Boot application. I am passing my main class in the Stork configuration file (.yml), and the Stork files are generated. But when I try to install and start the application, it fails with a NoClassDefFoundError -- it is not able to find the main class.

undeploy/uninstall

I was testing stork and I did a few deploys to a server. Is there a way to uninstall all or some specific versions?

Add files to conf/

How can I add files to conf/?
I tried creating src/main/conf/foo.conf to see if it would be packed into the tar.gz.

Temp dir permissions w/ multiple deploy users

When multiple users do deploys, they are unable to overwrite each other's files in the /tmp directory. The workaround is to manually clean up the /tmp/stork-deploy directory on the target beforehand. The long-term fix is to do a better job of having each deploy clean up after itself and to switch to random temp directories.

Multiple Java Args

When we need to pass multiple java arguments, should they be all specified in a single set of quotes, or as a comma separated list in square brackets?

java_args: "-Dtest=foo -Dopt2=abc"

or

java_args: ["-Dtest=foo", "-Dopt2=abc"]

Add platform_configurations section to README

It took me quite a while to figure out how to specify a user and group for the generated systemd service file in the stork.yml launcher definition. I eventually found it in one of the example files, but it doesn't appear to be documented anywhere.

Here's the example I found, but I don't know if this is all of the available options for this section:

platform_configurations:
  LINUX:
    daemon_method: NOHUP
    user: "daemon"
    group: "daemon"
  
  MAC_OSX:
    user: "_daemon"
    group: "_daemon"
  
  WINDOWS:
    daemon_method: JSLWIN

Need a step by step usage manual.

Hi,

I'm trying to figure out how to generate a systemd init script using Stork, but I can't figure out how to use it. Would it be possible to have a step by step manual on how to proceed?

I have akamai-cefconnector, a java program that need to be run as a service (daemon) on Linux.

Folder structure is :

./bin/CEFConnector-1.6.0.jar
./config/[CEFConnector.properties, log4j2.xml]
./lib
./logs

I just need to generate the systemd init script to operationalise it.

Thank you!

install fails on Windows

Environment:

  • java 8
  • jooby 2.8.10
  • Windows 10
  • maven 3.6.3

This error occurred after packaging a Jooby app which uses the jooby-stork plugin
https://github.com/jooby-project/jooby/blob/2.x/pom.xml#L151
which uses stork 3.1.0

Steps:

jooby create test --stork
cd stork
  • in stork.yml
platforms: [ WINDOWS ]
  • package
mvn clean package
  • extract /target/test-1.0.0.stork.zip and run
\bin\test.bat --install

Expected Result:

  • The application is installed and can then be started using the start command.

Actual Result:

bin\test64.exe is not compatible with the version of Windows you're running. 
Check your computer's system information and then contact the software publisher.

Incorporate JSL 0.99w

Not sure what version it was introduced in, but 0.99w supports Windows Registry keys used by Adopt OpenJDK.

Path problem when using --start

I am trying to use stork 2.0.0 in a ninja framework/maven based API project deployed to a staging environment on Ubuntu 14.04 with stork-deploy. When the application is about to start (at deploy or started by hand) I get the following errors:

Starting ContactApi: /opt/ContactAPI/current/bin/ContactApi: 806: /opt/ContactAPI/current/bin/ContactApi: cannot create /opt/ContactAPI/current//opt/ContactAPI/current/log/ContactApi.out: Directory nonexistent   
/opt/ContactAPI/current/bin/ContactApi: 808: /opt/ContactAPI/current/bin/ContactApi: cannot create /opt/ContactAPI/current//opt/ContactAPI/current/log/ContactApi.out: Directory nonexistent   
failed

The application is currently a snapshot version. Running it with "./ContactAPI --run" in the "/opt/ContactAPI/current/bin" directory is working fine. "./ContactAPI --start" in the same directory is showing the behaviour above.

Starting it in "/opt/ContactAPI/current" with "bin/ContactAPI --start" is also working nicely as well as a "bin/ContactAPI --stop".

Starting it with "service ContactAPI start" is showing the same errors.

I have no idea where to search for this problem. Any idea/hint?

Retain option missing on stork-deploy > 2.7.0

Hi Joe,

how are you doing :-)

I started using stork-deploy to deploy our Java apps and I really like it. It's so simple -- the single dependency is Java. Goodbye Ansible!

I found a minor issue with the current version of stork-deploy. The changelog states (for version 2.6.0)

stork-deploy: New retain option to only retain a specified number of past versions after a successful deploy.

but the option is missing when looking in the code. The DeployOptions class has already the field, but the DeployMain class is missing the parameter.

I could provide a PR if you like containing that little fix.

Cheers,
Marvin

No server.out log file

I am using the plugin to package an application. The problem I am having is that it seems to want to create a server.out log file. I really don't need this file. Is there a way of telling the script not to create a server.out file?

Add gradle plugin

It would be good to have this also available as a Gradle plugin :)

downgrade

Sometimes after the new deploy, we find out that new version has a nasty bug and we need to downgrade to the previous version. I'd love to have the option to select the previous version.

Thank you.

Where is the gradle plugin?

Hi,
Migrating from Maven to Gradle, I tried the plugin but it isn't found:

CONFIGURE FAILED in 1s
Plugin [id: 'com.fizzed.stork', version: '2.8.0'] was not found in any of the following sources:

- Gradle Core Plugins (plugin is not in 'org.gradle' namespace)
- Plugin Repositories (could not resolve plugin artifact 'com.fizzed.stork:com.fizzed.stork.gradle.plugin:2.8.0')
  Searched in the following repositories:
    Gradle Central Plugin Repository

Add support for `--add-opens` JVM option

Starting from JDK 9 we may need to add option --add-opens to the application, but if user has JDK 8 installed it won't launch because JDK 8 is not aware of such option.

Deployment issue

Hi
When deploying I get the following error on my VSTS CI server:
2018-07-25T20:35:17 [ERROR] Unable to cleanly deploy
com.fizzed.blaze.ssh.SshException: Auth fail
    at com.fizzed.blaze.ssh.impl.JschConnect.tryToUnwrap(JschConnect.java:369)
    at com.fizzed.blaze.ssh.impl.JschConnect.doRun(JschConnect.java:356)
    at com.fizzed.blaze.ssh.impl.JschConnect.doRun(JschConnect.java:50)
    at com.fizzed.blaze.core.Action.runResult(Action.java:33)
    at com.fizzed.blaze.core.Action.run(Action.java:39)
    at com.fizzed.stork.deploy.Targets.sshConnect(Targets.java:71)
    at com.fizzed.stork.deploy.Targets.connect(Targets.java:46)
    at com.fizzed.stork.deploy.Deployer.deploy(Deployer.java:47)
    at com.fizzed.stork.deploy.DeployMain.run(DeployMain.java:160)
    at com.fizzed.stork.deploy.DeployMain.main(DeployMain.java:33)
Caused by: com.jcraft.jsch.JSchException: Auth fail
    at com.jcraft.jsch.Session.connect(Session.java:519)
    at com.fizzed.blaze.ssh.impl.JschConnect.doRun(JschConnect.java:342)
    ... 8 common frames omitted

What can the issue be? I can scp files to the server, so SSH is configured correctly.

stork-fabric-deploy on windows ?

I dug deeper into your project and wanted to test stork-fabric-deploy. I only see a Linux command-line tool for this command, where other commands have a Windows batch equivalent (http://puu.sh/gkq8i/69273c738b.png). I imagine that's because it uses SSH to push, etc. Do you think there is a way to use stork-fabric-deploy from a Windows dev environment to a Linux prod/test env?

stork-deploy: Warn on deploy of existing version

If attempting to deploy a production version (e.g. no -SNAPSHOT) at end of a tarball then warn if it already exists on target server. Add a command-line option of --force or -f to force a re-deploy of an existing version.

Failed start on MacOS Catalina

When setting memory by percentage of system, program can't start and report below error:

Unable to detect system memory to set java max memory

It seems that on Catalina below line is incorrect:

local mem_bytes=`sysctl -a 2>/dev/null | grep "hw.memsize" | head -n 1 | awk -F'=' '{print $2}'`

should it be the same as freebsd?

local mem_bytes=`sysctl -a 2>/dev/null | grep "hw.memsize" | head -n 1 | awk -F'[:=]' '{print $2}'`

systemd instead of init

Hello,

I'm very interested in trying this one on my production servers for deploying, however I wonder if it was possible to choose between init and systemd, as my target is RHEL.

I'm using type: CONSOLE, so my service only has :

[Unit]
Description=myapp sample application
After=syslog.target network.target

[Service]
ExecStart=/opt/folder/myapp/bin/myapp

[Install]
WantedBy=multi-user.target

I think we should add Restart=always, and of course the application must be configured with working_dir_mode: APP_HOME!

Still need to find how to configure for type: DAEMON.

Other things that would be cool:

  • choose the target folder, with a default of /opt
  • allow local deploy without sshing to the same server

What do you think ?

Spring Boot demo start failure on macOS Mojave

A Spring Boot demo & stork is attached.
Packaging with the command $ ./mvnw clean package works fine.
The jar file can be run with $ java -jar xxx.jar, but the stork script fails to start.
The error is:

$ ./target/stork/bin/hello-server --run
Error: Could not find or load main class com.example.demo.DemoApplication

Could anyone help me? Thank you!
demo.zip

stork-fabric-deploy only work the first call, crash on second one

I tested stork-fabric-deploy from a Linux env to a Linux env:
stork-fabric-deploy -H [email protected] --assembly target/test-1.0-SNAPSHOT.tar.gz

This works well the first time. On the second push from serv1 to serv2 I got an error:

run: mkdir "test-1.0-SNAPSHOT"
run: rsync -at "/opt/test/version-1.0-20150303041435/" "/tmp/test-1.0-SNAPSHOT/" --exclude="conf" --exclude="data" --exclude="log" --exclude="run"
[localhost] local: expect -c 'exp_internal 0; set timeout 20; spawn rsync -avrtc --exclude=conf --delete --force -e "ssh -oStrictHostKeyChecking=no -p 22" work/test-1.0-SNAPSHOT/ [email protected]:/tmp/test-1.0-SNAPSHOT/; expect "?assword:"; send "123456\n"; expect eof'
/bin/sh: 1: expect: not found
Fatal error: local() encountered an error (return code 127) while executing 'expect -c 'exp_internal 0; set timeout 20; spawn rsync -avrtc --exclude=conf --delete --force -e "ssh -oStrictHostKeyChecking=no -p 22" work/test-1.0-SNAPSHOT/ [email protected]:/tmp/test-1.0-SNAPSHOT/; expect "?assword:"; send "123456\n"; expect eof''

The folder is created in /opt, but it seems the existing current symlink was blocking it.
The server is password protected, so it asks me for the password when I use the command.

I also noticed some things that I don't really agree with:
For example, you use the /tmp folder to extract the pushed tar but then don't remove it, so it can easily be explored by an attacker, for example.
Second point: the project is stored in /opt. It would be nice if we could configure that.

In my case I am using an environment that is multi-user. By that I mean I am developing an admin panel and pushing it to a server where other users have PHP files and can easily explore the hard disk.
I try to protect my files from being read and potentially accessed by others. Having this /tmp folder containing all this sensitive information, and not cleaned up, worries me.
The /opt folder is readable by others (even if we can chmod o-rwx the project folder).
But that is another problem. The main problem is that stork-fabric-deploy only works the first time I execute the command.

Status of project? No commits in almost a year.

Due to an issue with the Maven plugin and a Java 11 target app, I realized that there have been no commits on master over the last year. Is the project still maintained?

Do you need help with anything?
