
doc-slesforsap's Introduction

SUSE Linux Enterprise Server for SAP Applications Documentation

This is the source for the official SUSE Linux Enterprise Server for SAP Applications documentation.

This repository hosts the documentation sources including translations (if available).

Released versions of the documentation are published at https://documentation.suse.com/sles-sap/.

Branches

Table 1. Overview of important branches

Name                 Purpose
main                 Current working branch
maintenance/15_SP4   Maintenance branch for SLES-SAP 15 SP4
maintenance/15_SP3   Maintenance branch for SLES-SAP 15 SP3
maintenance/15_SP2   Maintenance branch for SLES-SAP 15 SP2
maintenance/15_SP1   Maintenance branch for SLES-SAP 15 SP1
maintenance/15_GA    Maintenance branch for SLES-SAP 15 GA
maintenance/12_SP5   Maintenance branch for SLES-SAP 12 SP5
maintenance/12_SP4   Maintenance branch for SLES-SAP 12 SP4
maintenance/12_SP3   Maintenance branch for SLES-SAP 12 SP3
maintenance/12_SP2   Maintenance branch for SLES-SAP 12 SP2
maintenance/12_SP1   Maintenance branch for SLES-SAP 12 SP1
maintenance/12_GA    Maintenance branch for SLES-SAP 12 GA
maintenance/11_SP4   Maintenance branch for SLES-SAP 11 SP4

On Feb 20, 2021, we switched to a new default branch. The default branch is now called main.

Use the main branch as the basis for your commits and new feature branches.

How to update your local repository

If you created a local clone or GitHub fork of this repo before Feb 20, 2021, do the following:

git branch -m master main
git fetch origin
git branch -u origin/main main
git pull -r
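
To confirm that your local main branch now tracks origin/main, you can optionally check with:

git branch -vv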

Contributing

Thank you for contributing to this repo. Please adhere to the following guidelines when creating a pull request:

  1. Make your pull request against the main branch if you are contributing to the most recent release. This branch is protected.

  2. If you are contributing to a previous release, use the corresponding maintenance/<RELEASE> branch (for example, maintenance/15_SP4). These branches are also protected.

  3. Make sure all validation (Travis CI) checks pass, and tag relevant SMEs from the development team (if applicable) and members of the SLES-SAP doc team: Thomas Schraitle (@tomschr).

    NOTE: If your pull request touches multiple files or reorganizes content, please build locally using DAPS or daps2docker
    (see instructions below) to verify that the documents still build. Travis CI only validates the XML and does not ensure
    that the builds are correct.
  4. Implement any required changes, or fix any merge conflicts if relevant. If you have any questions, ping a documentation team member in #susedoc on RocketChat.

Editing DocBook

To contribute to the documentation, you need to write DocBook.

  • You can learn about DocBook syntax at http://docbook.org/tdg5/en/html.

  • SUSE documents are generally built with DAPS (package daps) and the SUSE XSL Stylesheets (package suse-xsl-stylesheets). Ideally, you should get these from the repository Documentation:Tools. However, slightly older versions are also available from the SLE and openSUSE repositories.
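
For example, on a recent openSUSE system the packages named above can be installed from the standard repositories with zypper (adding the Documentation:Tools repository first is preferred, but how to add it is distribution-specific):

sudo zypper install daps suse-xsl-stylesheets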

Building documentation

If you are interested in building the documentation with DAPS (defaulting to HTML and PDF output), you can use either DAPS directly or daps2docker. Both tools only work on Linux.

  • Use daps2docker if you are on a Linux distribution that includes Docker and systemd, only want to build HTML, PDF, or EPUB, and want to be set up as quickly as possible.

  • Use DAPS directly if you are using a recent version of openSUSE, and want to use any of the advanced features of DAPS, such as building Mobipocket or spell-checking documents.

Using daps2docker

  1. Install Docker

  2. Clone the daps2docker repository from https://github.com/openSUSE/daps2docker.

  3. Within the cloned repository, run ./daps2docker.sh /PATH/TO/DOC-DIR. This builds HTML and PDF documents.
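
For example, assuming this repository was cloned to ~/doc-slesforsap (adjust the path to your checkout):

./daps2docker.sh ~/doc-slesforsap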

Using DAPS directly

  • $ daps -d DC-<YOUR_BOOK> validate: Make sure what you have written is well-formed XML and valid DocBook 5

  • $ daps -d DC-<YOUR_BOOK> pdf: Build a PDF document

  • $ daps -d DC-<YOUR_BOOK> html: Build multi-page HTML document

  • Learn more at https://opensuse.github.io/daps
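
A typical workflow chains these commands. For example, with a hypothetical DC file named DC-SLES-SAP-guide (substitute the DC file you are actually working on):

$ daps -d DC-SLES-SAP-guide validate
$ daps -d DC-SLES-SAP-guide html
$ daps -d DC-SLES-SAP-guide pdf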


doc-slesforsap's Issues

Add section in automation documentation page with the required steps to run the formulas

Hello,

In order to improve the documentation of the automation page, we should add a new section explaining how to use the formulas in a generic way. I have the following in mind.

Create a new section after all of the formulas are listed, named something like How to use the formulas.
This section would contain the following information:

The Salt formulas can be used with two different approaches: Salt master/minion, or Salt minion-only execution.

For the master/minion approach, all of the following steps must be executed on the master machine. If the minion-only option is used, the steps must be executed on every minion where the formulas are going to be executed.

  1. Install the formulas
    Install the formulas that are going to be applied using zypper (an example command is shown after this procedure).

  2. Create the pillar file structure
    The pillar files work as configuration files for the Salt formulas; they are the input the user has to provide. The following structure is the most appropriate one (this is only an example):

a. Create the /srv/pillar folder (this might already be in place).
b. Create a top.sls file in /srv/pillar with the following content. This will apply the hana, netweaver, drbd and ha formulas on the nodes specified by host name (matched as a list of minion IDs).

base:
    'hana01,hana02':
        - match: list
        - hana.hana
        - hana.cluster

    'netweaver01,netweaver02':
        - match: list
        - netweaver.netweaver
        - netweaver.cluster

    'drbd01,drbd02':
        - match: list
        - drbd.drbd
        - drbd.cluster

c. Create the folders /srv/pillar/hana, /srv/pillar/netweaver and /srv/pillar/drbd for the pillar files (notice that these names match the names used in the top.sls file).

d. Add the pillar files to the correct folders: the hana folder will have hana.sls and cluster.sls, the netweaver folder will have netweaver.sls and cluster.sls, and the drbd folder will have drbd.sls and cluster.sls.

Here are examples of the contents you can use for each of the formulas:
hana.sls
netweaver.sls
drbd.sls
cluster.sls

The content of the pillar files must be configured depending on the needed configuration of each of the formulas (notice that in this example, three different cluster.sls files are used, and each of them might be different, as the HA cluster requirements might differ too).
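
Putting sub-steps a through d together, the resulting pillar layout would look like the following sketch (file contents depend on your configuration):

/srv/pillar/top.sls
/srv/pillar/hana/hana.sls
/srv/pillar/hana/cluster.sls
/srv/pillar/netweaver/netweaver.sls
/srv/pillar/netweaver/cluster.sls
/srv/pillar/drbd/drbd.sls
/srv/pillar/drbd/cluster.sls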

e. Create a top.sls file in the /srv/salt folder. It follows the same rules as the pillar top.sls file. Here is an example (in this case the folder part is omitted, as we are pointing to the formulas by their name):

base:
    'hana01,hana02':
        - match: list
        - hana
        - cluster

    'netweaver01,netweaver02':
        - match: list
        - netweaver
        - cluster

    'drbd01,drbd02':
        - match: list
        - drbd
        - cluster
  3. Execute Salt for each of the approaches (Salt supports more options than these; this is just an example)
    a. For the master/minion approach, run:

salt '*' state.highstate

    b. For the minion-only approach, run:

salt-call --local state.highstate
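
As mentioned in step 1, the formulas themselves are installed with zypper. A minimal sketch, assuming the HANA, NetWeaver, DRBD and HA bootstrap formula packages (the exact package names depend on your product version and on which formulas you need):

zypper install saphanabootstrap-formula sapnwbootstrap-formula drbd-formula habootstrap-formula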

Edit toms:
Related to bsc#1174530

Trento documentation: env variable required for installation of Trento Server

Trento Server installation must be updated as follows:

Step 2 (installing as user root)

Replace
curl -sfL https://get.k3s.io | sh
with
curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_SELINUX_RPM=true sh

Step 2 (installing as non-root user)
Replace
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
with
curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_SELINUX_RPM=true sh -s - --write-kubeconfig-mode 644

This change prevents users running SLES 15 SP3 from running into an issue when installing K3s due to the missing container-selinux package.

Trento premium: correct installation steps for server

A new version of Trento premium is available (v0.8.1).

The installation process for the Agent hasn't changed. But the installation process for the server must be adjusted to accommodate the new version:

Please replace:

HELM_EXPERIMENTAL_OCI=1 helm upgrade --install \
   TRENTO_SERVER_HOSTNAME \
   oci://registry.suse.com/trento/trento-server \
   --version 0.2.5 \
   --set trento-runner.image.tag=0.7.1 \
   --set trento-web.image.tag=0.7.1 \
   --set-file trento-runner.privateKey=PRIVATE_SSH_KEY 

with

HELM_EXPERIMENTAL_OCI=1 helm upgrade --install \
   trento-server \
   oci://registry.suse.com/trento/trento-server \
   --version 0.3.5 \
   --set-file trento-runner.privateKey=PRIVATE_SSH_KEY
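
After the upgrade, a quick sanity check can be done with standard Helm and kubectl commands, assuming the default k3s kubeconfig path used elsewhere in these instructions:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm list
kubectl get pods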

Trento premium: update sections outdated

The update sections in the Trento Premium documentation still reflect the update process for the community / open-source version. They must be updated and aligned with the Trento Premium installation process.

Particularly:

  • In the case of the Trento Server, the update process should consist of two steps:
    Step 1: export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    Step 2: HELM_EXPERIMENTAL_OCI=1 helm upgrade --install trento-server \
      oci://registry.suse.com/trento/trento-server \
      --version 0.3.5 \
      --set-file trento-runner.privateKey=PRIVATE_SSH_KEY
  • In the case of the Trento Agent, the update process is identical to the installation process.

Broken whitepaper links in 15 SP2 docs

In art-sol-automation for 15 SP2, there are the following broken links:

All of these are probably under docs.suse.com/sbp/ now.

DRBD/NFS Automated Configuration title in automation docs is not really accurate

The section title DRBD/NFS Automated Configuration in the automation documentation page is not really accurate.

The drbd-formula only configures DRBD, not NFS. In order to use NFS, we need to do things on top of it.

The thing is that SUSE recommends using DRBD with HA to provide a highly available NFS share. For that, we configure the HA cluster on top of DRBD to create the NFS exports. But this is not something the drbd-formula does.

Maybe the title is confusing.

If we want to document how to have highly available NFS with DRBD, we will need to add more (explain that we need to use the drbd-formula and the habootstrap-formula, and after that apply a specific configuration to the cluster to set it up as an NFS server; see the sketch below).
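
For illustration, that "specific configuration on top" would be ordinary cluster resources layered over the DRBD device. A minimal, heavily simplified crmsh sketch with assumed resource names, device and export path (this is not something the drbd-formula generates):

crm configure primitive rsc_drbd_nfs ocf:linbit:drbd \
    params drbd_resource=nfs op monitor interval=15
crm configure ms ms_drbd_nfs rsc_drbd_nfs \
    meta master-max=1 clone-max=2 notify=true
crm configure primitive rsc_fs_nfs ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/srv/nfs fstype=xfs
crm configure primitive rsc_exportfs_nfs ocf:heartbeat:exportfs \
    params directory=/srv/nfs clientspec="*" options=rw fsid=1
crm configure group grp_nfs rsc_fs_nfs rsc_exportfs_nfs
crm configure colocation col_nfs_on_drbd inf: grp_nfs ms_drbd_nfs:Master
crm configure order ord_drbd_before_nfs inf: ms_drbd_nfs:promote grp_nfs:start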
