
Intel Container Experience Kits Setup Scripts

Intel Container Experience Kits Setup Scripts provide a simplified mechanism for installing and configuring Kubernetes clusters on Intel Architecture using Ansible.

The software provided here is for reference only and not intended for production environments.

Quickstart guide

NOTE: The instructions below assume deployment as the root user by default. If you want to deploy as a non-root user, read this file first and then follow the steps below as that non-root user.

  1. Decide which configuration profile you want to use and export the corresponding environment variable.

    NOTE: The variable is used only to simplify execution of the steps listed below.

    • For Kubernetes Basic Infrastructure deployment:

      export PROFILE=basic
    • For Kubernetes Access Edge Infrastructure deployment:

      export PROFILE=access
    • For Kubernetes Edge Ready Infrastructure deployment:

      export PROFILE=base_video_analytics
    • For Kubernetes Regional Data Center Infrastructure deployment:

      export PROFILE=regional_dc
    • For Kubernetes Remote Central Office-Forwarding Configuration deployment:

      export PROFILE=remote_fp
    • For Kubernetes Infrastructure On Customer Premises deployment:

      export PROFILE=on_prem
    • For Kubernetes Infrastructure On Customer Premises for VSS deployment:

      export PROFILE=on_prem_vss
    • For Kubernetes Infrastructure On Customer Premises for AI Box deployment:

      export PROFILE=on_prem_aibox
    • For Kubernetes Infrastructure On Customer Premises for SW-Defined Factory deployment:

      export PROFILE=on_prem_sw_defined_factory
    • For Kubernetes Build-Your-Own Infrastructure deployment:

      export PROFILE=build_your_own
  2. Install Python dependencies using one of the following methods:

    NOTE: Ensure that at least Python 3.9 is installed on the Ansible host.

    a) Non-invasive virtual environment using pipenv

    pip3 install pipenv
    pipenv install
    # Then to run and use the environment
    pipenv shell

    b) Non-invasive virtual environment using venv

    python3 -m venv venv
    # Then to activate new virtual environment
    source venv/bin/activate
    # Install dependencies in venv
    pip3 install -r requirements.txt

    c) System wide environment (not recommended)

    pip3 install -r requirements.txt
  3. Install Ansible collection dependencies with the following command:

    ansible-galaxy install -r collections/requirements.yml
  4. Copy SSH key to all Kubernetes nodes or VM hosts you are going to use.

    ssh-copy-id <user>@<host>
  5. Generate example host_vars, group_vars and inventory files for Intel Container Experience Kits profiles.

    NOTE: It is highly recommended to read this file before generating the profiles.

    Architecture and Ethernet Network Adapter type can be auto-discovered:

    make auto-examples HOSTS=X.X.X.X,X.X.X.X USERNAME=<user>

    or specified manually:

    make examples ARCH=<atom,core,icx,spr,emr,gnr,ultra> NIC=<fvl,cvl>

    NOTE: spr and cvl are the default values.
  6. Copy example inventory file to the project root dir.

    cp examples/k8s/${PROFILE}/inventory.ini .

    or, for VM case:

    cp examples/vm/${PROFILE}/inventory.ini .

    NOTE: For cloud profiles no inventory.ini file is created, as it will be generated during machine provisioning. As a result, this step can be skipped.

  7. Update inventory file with your environment details.

    For the VM case: update the details relevant to the vm_host.

    NOTE: At this stage you can inspect your target environment by running:

    ansible -i inventory.ini -m setup all > all_system_facts.txt

    The all_system_facts.txt file contains details about your hardware, operating system and network interfaces, which will help you properly configure the Ansible variables in the next steps.
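    If you only need a subset of facts, the setup module's filter parameter can narrow the output; for example (the fact name here is just an illustration):

    ansible -i inventory.ini -m setup -a "filter=ansible_default_ipv4" all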

  8. Copy group_vars and host_vars directories to the project root dir.

    cp -r examples/k8s/${PROFILE}/group_vars examples/k8s/${PROFILE}/host_vars .

    or, for VM case:

    cp -r examples/vm/${PROFILE}/group_vars examples/vm/${PROFILE}/host_vars .

    or, for Cloud case:

    cp -r examples/cloud/${PROFILE}/group_vars examples/cloud/${PROFILE}/host_vars .
  9. Update group and host vars to match your desired configuration. Refer to this section for more details.

    NOTE: Please pay special attention to the http_proxy, https_proxy and additional_no_proxy vars if you're behind a proxy.
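    For example, in group_vars/all.yml (the values below are illustrative placeholders, not defaults shipped with this repo):

    http_proxy: "http://proxy.example.com:3128"
    https_proxy: "http://proxy.example.com:3128"
    additional_no_proxy: ".cluster.local,192.168.0.0/16"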

    For the VM case:

    • update details relevant to the vm_host (e.g. dataplane_interfaces, ...)
    • update the VMs definition in host_vars/host-for-vms-1.yml - use that template for the first vm_host
    • update the VMs definition in host_vars/host-for-vms-2.yml - use that template for the second and all other vm_hosts
    • update/create host_vars for all defined VMs (e.g. host_vars/vm-ctrl-1.cluster1.local.yml and host_vars/vm-work-1.cluster1.local.yml). If vm_cluster_name is not defined or is empty, short host_vars file names should be used for the VMs (e.g. host_vars/vm-ctrl-1.yml and host_vars/vm-work-1.yml). The needed details are at least dataplane_interfaces. For more details, see the VM case configuration guide.
  10. Mandatory: apply the patch for the Kubespray collection.

    ansible-playbook -i inventory.ini playbooks/k8s/patch_kubespray.yml
  11. Execute ansible-playbook.

    NOTE: For the Cloud case this step is not used. See the cloud/ directory for more details.

    NOTE: It is recommended to use "--flush-cache" (e.g. "ansible-playbook --flush-cache -i inventory.ini playbooks/remote_fp.yml") when executing ansible-playbook, in order to avoid issues such as skipped tasks/roles or stale inventory details from a previous run.

    ansible-playbook -i inventory.ini playbooks/${PROFILE}.yml

    NOTE: For the on_prem_aibox case, the "-b -K" flags need to be added for localhost deployment.

    ansible-playbook -i inventory.ini -b -K playbooks/on_prem_aibox.yml

    or, for VM case:

    ansible-playbook -i inventory.ini playbooks/vm.yml

    NOTE: VMs are accessible from the Ansible host via ssh vm-ctrl-1 or ssh vm-work-1.

Cleanup

Refer to the documentation to see details about how to clean up an existing deployment or a specific feature.

Configuration

Refer to the documentation linked below to see configuration details for selected capabilities and deployment profiles.

Prerequisites and Requirements

  • Required packages on the target servers: Python3.

  • Required packages on the ansible host (where ansible playbooks are run): Python3.8-3.10 and Pip3.

  • Required python packages on the ansible host. See requirements.txt.

  • SSH keys copied to all Kubernetes cluster nodes (ssh-copy-id <user>@<host> command can be used for that).

  • For the VM case, SSH keys copied to all VM hosts (ssh-copy-id <user>@<host> command can be used for that).

  • Internet access on all target servers is mandatory. Proxy is supported.

  • At least 8GB of RAM on the target servers/VMs for a minimal number of functions (some Docker image builds are memory-hungry and may cause OOM kills of the Docker registry; observed with 4GB of RAM), and more if you plan to run heavy workloads such as NFV applications.

  • For RHEL-like OSes, SELinux must be configured prior to the CEK deployment and the required SELinux-related packages should be installed. CEK itself preserves the initial SELinux state, but SELinux-related packages might be installed during the k8s cluster deployment as a dependency (e.g. for the Docker engine), causing OS boot failures or other inconsistencies if SELinux is not configured properly. The preferred SELinux state is permissive.

    For more details, please refer to the respective OS documentation.
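    For example, to check the current state and switch to permissive on a RHEL-like system (a minimal sketch, to be run before the CEK deployment):

    getenforce
    setenforce 0                                                    # runtime change
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # persist across reboots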

Contributing

Contributors should, besides the basic set of packages, also install the developer packages, using the command:

pipenv install --dev

or

pip install -r ci-requirements.txt

Run lint checks locally

Several lint checks are configured for the repository. All of them can be run in a local environment using the prepared bash scripts or by leveraging pre-commit hooks.

Prerequisite packages:

  • developer python packages (ci-requirements.txt/Pipfile)
  • shellcheck
  • pre-commit python package

Required checks in CI:

  • ansible-lint
  • bandit
  • pylint
  • shellcheck

Checks can be run with the following command:

./scripts/run_<linter_name>.sh

or alternatively:

pre-commit run <linter_name> --all-files
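To run the checks automatically before each commit, the git hooks can also be installed once:

pre-commit install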


Issues

sriov-cni build fails on ubuntu20 (cek version v21.09)

Environment:

  • Cloud provider or hardware configuration: openstack
  • OS: Ubuntu 20.04.2 LTS
  • Version of Ansible (ansible --version): ansible 2.9.20
  • Version of Python (python --version): Python 3.8.10
  • CEK version: v21.09
  • Network plugin used: flannel

Inventory:

[all]
mossapio-k8s-master-1 ansible_user=root
mossapio-k8s-node-1 ansible_user=root
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3

[kube_control_plane]
mossapio-k8s-master-1

[etcd]
mossapio-k8s-master-1

[kube_node]
mossapio-k8s-master-1
mossapio-k8s-node-1

[k8s_cluster:children]
kube_control_plane
kube_node

[all:vars]
ansible_python_interpreter=/usr/bin/python3

Command used to invoke ansible: ansible-playbook -i examples/full_nfv/inventory.ini playbooks/full_nfv.yml

Output of ansible run:

TASK [sriov_cni_install : build sriov-cni plugin] *********************************************************************************************************************************************************
fatal: [mossapio-k8s-node-1]: FAILED! => {                                                                                                                                                                 
    "changed": false,                                                                                                                                                                                      
    "cmd": "/usr/bin/make",                                                                                                                                                                                
    "rc": 2                                                                                                                                                                                                
}                                                                                                                                                                                                          
                                                                                                                                                                                                           
STDOUT:                                                                                                                                                                                                    
                                                                                                                                                                                                           
Running gofmt...                                                                                                                                                                                           
Setting GOPATH...                                                                                                                                                                                          
Building golint...                                                                                                                                                                                         
Running golint...                                                                                                                                                                                          
Creating build directory...                                                                                                                                                                                
                                                                                                                                                                                                           
                                                                                                                                                                                                           
                                                                                                                                                                                                           
STDERR:                                                                                                                                                                                                    
                                                                                                                                                                                                           
go: downloading github.com/containernetworking/cni v0.8.1                                                                                                                                                  
go: downloading github.com/containernetworking/plugins v0.8.7                                                                                                                                              
go: downloading github.com/vishvananda/netlink v1.0.1-0.20190924205540-07ace697bea4                                                                                                                        
go: downloading golang.org/x/sys v0.0.0-20210510120138-977fb7262007                                                                                                                                        
go: downloading github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc                                                                                                                            
go: downloading github.com/coreos/go-iptables v0.4.5                                                                                                                                                       
go: downloading github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8                                                                                                                             
go: downloading golang.org/x/lint v0.0.0-20210508222113-6edffad5e616                                                                                                                                       
go: downloading golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7                                                                                                                                      
go: downloading golang.org/x/tools v0.1.8                                                                                                                                                                  
go get: added golang.org/x/lint v0.0.0-20210508222113-6edffad5e616                                                                                                                                         
go get: upgraded golang.org/x/sys v0.0.0-20210510120138-977fb7262007 => v0.0.0-20211019181941-9d821ace8654                                                                                                 
go get: added golang.org/x/tools v0.1.8                                                                                                                                                                    
../../../../../../pkg/mod/github.com/containernetworking/plugins@v0.8.7/pkg/ns/ns_linux.go:24:2: missing go.sum entry for module providing package golang.org/x/sys/unix (imported by github.com/containernetworking/plugins/pkg/ns); to add:
        go get github.com/containernetworking/plugins/pkg/ns@v0.8.7
make: *** [Makefile:59: /usr/src/sriov-cni/build/sriov] Error 1                                                                                                                                            
                                                                                                                                                                                                           
                                                                                                                                                                                                           
                                                                                                                                                                                                           
MSG:                                                                                                                                                                                                       
                                                                                                                                                                                                           
go: downloading github.com/containernetworking/cni v0.8.1                                                                                                                                                  
go:********@v0.8.7                                                                                                                                                                                         
make: *** [Makefile:59: /usr/src/sriov-cni/build/sriov] Error 1

Anything else we need to know:
I know this is sriov-cni related, but I'm reporting it here as well, together with a quick potential fix: updating the sriov-cni version to v2.6.2 fixes the build. I'm not sure whether something else was pinned to v2.6.1, which is why I call it a potential fix:

git diff roles/sriov_cni_install/
diff --git a/roles/sriov_cni_install/defaults/main.yml b/roles/sriov_cni_install/defaults/main.yml
index 4f6c82e..b377119 100644
--- a/roles/sriov_cni_install/defaults/main.yml
+++ b/roles/sriov_cni_install/defaults/main.yml
@@ -14,6 +14,6 @@
 ##   limitations under the License.
 ##
 ---
-sriov_cni_version: "v2.6.1"
+sriov_cni_version: "v2.6.2"
 sriov_cni_url: "https://github.com/k8snetworkplumbingwg/sriov-cni.git"
 sriov_cni_dir: "/usr/src/sriov-cni"

With v2.6.2 the build is fine:

TASK [install_dependencies : refresh repository cache] ****************************************************************************************************************************************************
changed: [mossapio-k8s-node-1]
changed: [mossapio-k8s-master-1]

TASK [install_dependencies : install packages] ************************************************************************************************************************************************************
ok: [mossapio-k8s-node-1]
ok: [mossapio-k8s-master-1]

TASK [sriov_cni_install : clone sriov-cni repository] *****************************************************************************************************************************************************
ok: [mossapio-k8s-master-1]
changed: [mossapio-k8s-node-1]

TASK [sriov_cni_install : build sriov-cni plugin] *********************************************************************************************************************************************************
changed: [mossapio-k8s-master-1]
changed: [mossapio-k8s-node-1]

TASK [sriov_cni_install : create /opt/cni/bin] ************************************************************************************************************************************************************
ok: [mossapio-k8s-node-1]
ok: [mossapio-k8s-master-1]

TASK [sriov_cni_install : install sriov-cni binary to /opt/cni/bin directory] *****************************************************************************************************************************
changed: [mossapio-k8s-node-1]
changed: [mossapio-k8s-master-1]

missing Makefile

Walking through the latest README of release 21.09, I found that it now mentions using make to generate the example host_vars, group_vars and inventory. Currently this is not possible, since there is no Makefile in this repository. Commit 9b9a465 changes the README, but does not add the needed Makefile (maybe it was forgotten?).

Collectd pods in CrashLoopBackOff

Context
Deployment of BMRA in a virtual environment.

Symptoms
Collectd pods are in a crash loop after deployment.
The root cause is that a conditional flag in the collectd daemonset, present in BMRA 21.09, has been removed in BMRA 22.01 (the git blame doesn't show this change).

Fix
Live-fixed by manually editing the running collectd daemonset and removing the mount; fixed in the deployment by restoring the flag and setting it to false in group_vars/all.yml.

v1.4.1: Role "net-attach-defs-create" fails during installation

The task in the net-attach-defs-create role that installs the pip module used to manage K8s objects (to satisfy a requirement of the k8s_raw module) fails with the error message:

Found existing installation: PyYAML 3.10\n\n:stderr: DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support\nERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

This can be fixed by adding

extra_args: --ignore-installed PyYAML

to the respective task in

roles/net-attach-defs-create/tasks/main.yml
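A sketch of what the patched task could look like (the task name and installed package are illustrative assumptions; only extra_args is the actual fix from this issue):

- name: install pip module to manage K8s objects
  pip:
    name: openshift  # illustrative package name; keep whatever the task already installs
    extra_args: --ignore-installed PyYAML  # the fix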

About removed support for CMK

Hi,
I noticed that in release 22.01 the support for Intel CPU Manager for Kubernetes (CMK) has been removed.
I have a DPDK application in Kubernetes that is using it, so I wondered why Intel decided to remove support for it... should I instead use the static CPU manager policy (--cpu-manager-policy=static) built into kubelet?
Or is there some other replacement that is more appropriate for CPU pinning in Kubernetes?

Thanks!
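For reference, the built-in static CPU manager mentioned above is enabled through the kubelet configuration; a minimal sketch with illustrative values:

# /var/lib/kubelet/config.yaml (path may differ per distribution)
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cpuManagerPolicy: static
reservedSystemCPUs: "0,1"  # illustrative; CPUs reserved for system daemons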

pip 9.0.3 needs to be installed on remote hosts as well

For a successful BMRA deployment, pip version 9.0.3 needs to be installed on the remote/k8s hosts. With newer pip versions, the installation fails.
For example, the following could be added in roles/bootstrap/install_packages/tasks/rhel.yml to get rid of the issue:

- name: Install recommended pip version
  pip:
    name:
      - pip==9.0.3

It would also be good to mention this in the documentation, which currently only mentions having pip 9.0.3 on the Ansible machine.

deduplicate the code within the PROFILE specific playbooks

Currently there are 6 different flavours of deployment available (basic, access, full_nfv, on_prem, regional_dc and remote_fp). To use those flavours, there are currently 6 almost identical playbooks in the playbooks/ folder. The only difference between those playbooks is the name of the infra/ and intel/ playbooks they import. Looking further down, there is also a large amount of similarity between the infra and intel playbooks that are meant to be specific to the chosen profile.
This amount of code duplication makes it unnecessarily complicated to add, update or refactor common parts, since those always need to be updated 6 times to be reflected in all playbooks. In addition, the chance that a fix or refactoring step only makes it into one of those playbooks is pretty high.
To fix this issue, my current suggestion would be to introduce a variable in the inventory like profile= and ask the user (or the render.py) to set this variable to the chosen $PROFILE.
After that, it should be possible to merge all the profile-specific playbooks into one common main playbook, which would call the common ones for intel/ and infra/. Within those intel/ and infra/ playbooks, it should be possible to evaluate the value of the new profile= variable in conditionals (e.g. when:) to only run specific tasks for specific profiles, as sketched below.
Please let me know if you like the general idea and, if so, how we can coordinate the work needed to move in this direction.
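A minimal sketch of the proposed approach, assuming a profile variable set in the inventory (the role name is hypothetical):

- hosts: k8s_cluster
  tasks:
    - name: run profile-specific setup
      include_role:
        name: some_profile_specific_role  # hypothetical role
      when: profile == "remote_fp"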

install dependencies on localhost

Currently the playbooks (e.g. remote_fp) assume that they can and should install certain dependencies on localhost. Since most people run Ansible from their local machine (e.g. their desktop, laptop or MacBook), this means that the current playbooks try to modify the machines they run on, and not only the remote machines they are meant to deploy.

https://github.com/intel/container-experience-kits/blob/master/playbooks/infra/remote_fp.yml#L17-L23
https://github.com/intel/container-experience-kits/blob/master/roles/bootstrap/ansible_host/tasks/main.yml

In my opinion this leads to a couple of issues:

  1. Privilege escalation on localhost (so on my local machine) is something that I never want to grant to an Ansible playbook downloaded from the internet (due to very basic security considerations), and IMHO it should never be needed.
  2. My local environment might look very different from what the playbook expects (e.g. using venv, or having my Python dependencies in a different path).
  3. Modifications to my local system should be left to me, or to be a bit more generic: any modifications needed to run the Ansible playbooks should be mentioned in the docs. There might be a script or even a playbook to help, but this should not be forced by default.

In conclusion, I think the whole dependency-installation part should be moved to the documentation, or at least made optional and disabled by default, as sketched below.
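One way the opt-in could look, assuming a hypothetical bootstrap_ansible_host flag that defaults to false:

- hosts: localhost
  roles:
    - role: bootstrap/ansible_host
      when: bootstrap_ansible_host | default(false)  # hypothetical opt-in variable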

Retrying failed CEK deployment breaks CMK

Hello,
My deployment (v21.08) failed on the TAS installation due to my config error.
When I retried the deployment after fixing the TAS config, the CMK pods went down to CrashLoopBackOff with 401 Unauthorized errors after the TLS certs for the CMK webhook had been recreated.
I will provide the full ansible log later on, with more details about the issue.

configure_intel_pstate does not reboot after changing grub config

Currently the tasks in the role bootstrap/configure_intel_pstate change the grub config (https://github.com/intel/container-experience-kits/blob/master/roles/bootstrap/configure_intel_pstate/tasks/setup_intel_pstate.yml#L29-L38) and notify the reboot handler. By default, all notifications are only executed at the end of the playbook run (or at the end of the role, in this case). This means that, by default, the task that checks for certain states depending on those grub changes (https://github.com/intel/container-experience-kits/blob/master/roles/bootstrap/configure_intel_pstate/tasks/setup_turbo.yml#L37-L52) will fail, because the reboot handler has not been executed yet. I think this can be solved by flushing the handlers, as described here (https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html#controlling-when-handlers-run), after the configuration is done and before the checks are run (e.g. here: https://github.com/intel/container-experience-kits/blob/master/roles/bootstrap/configure_intel_pstate/tasks/main.yml#L31).
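A minimal sketch of the suggested fix, placed between the grub configuration tasks and the verification tasks:

- name: apply pending grub changes by running the notified reboot handler now
  meta: flush_handlers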

deployment fails while using passwordless sudo user

Hello!
At the moment, deploying CEK (v21.08) works only when Ansible connects as the root user.
When I try to deploy it using another passwordless-sudo-enabled user (centos/ubuntu), it fails in multiple places:
ansible-playbook -i examples/full_nfv/inventory.ini playbooks/full_nfv.yml
My inventory:

[all]
mossapio-k8s-master-1 ansible_user=ubuntu
mossapio-k8s-node-1 ansible_user=ubuntu
localhost

[kube-master]
mossapio-k8s-master-1

[etcd]
mossapio-k8s-master-1

[kube-node]
mossapio-k8s-master-1
mossapio-k8s-node-1

[k8s-cluster:children]
kube-master
kube-node

[calico-rr]

[all:vars]
ansible_python_interpreter=/usr/bin/python3

I'll provide the full ansible log later, with the first encountered error.

using role/vars/main.yml to set defaults makes changing almost impossible

Currently, a lot of the included roles (like bootstrap/configure_security or bootstrap/update_nic_firmware) use vars/main.yml to define the default variable values, instead of using defaults/main.yml. Following the precedence order defined at https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#understanding-variable-precedence, this makes it very hard to override those variable values (in case you need to open an additional port in your firewall or use a different firmware). IMHO most of those variables should be considered defaults and moved to defaults/main.yml, to allow the user to set them directly from group_vars/all.yml or host_vars/node1.yml instead of requiring them to be passed at an even higher precedence (>15).
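A sketch of the suggested move, using a hypothetical variable (name and value are illustrative):

# roles/bootstrap/update_nic_firmware/defaults/main.yml
# role defaults sit at the lowest precedence, so group_vars/all.yml or host_vars/node1.yml can override them
nic_firmware_version: "8.30"  # hypothetical variable and value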

[RHEL8.2] Deployment failed for BMRA on_prem/remote_fp/full_nfv

Failed log:
2020-11-08 19:38:44,658 p=21545 u=root n=ansible | TASK [bootstrap/update_nic_drivers : build and install iavf driver] ************
2020-11-08 19:38:44,660 p=21545 u=root n=ansible | changed: [ap09-00-wc] => (item=clean)
2020-11-08 19:38:50,250 p=21545 u=root n=ansible | failed: [ap09-00-wc] (item=install) => {"ansible_loop_var": "item", "changed": false, "cmd": "/usr/bin/gmake install", "item": "install", "msg": "In file included from /usr/src/iavf-4.0.1/src/iavf.h:43,\n from /usr/src/iavf-4.0.1/src/iavf_main.c:4:\n/usr/src/iavf-4.0.1/src/kcompat.h:2758:10: fatal error: linux/pci-aspm.h: No such file or directory\n #include <linux/pci-aspm.h>\n ^~~~~~~~~~~~~~~~~~\ncompilation terminated.\ngmake[2]: *** [scripts/Makefile.build:316: /usr/src/iavf-4.0.1/src/iavf_main.o] Error 1\ngmake[1]: *** [Makefile:1544: module/usr/src/iavf-4.0.1/src] Error 2\ngmake: *** [Makefile:60: default] Error 2", "rc": 2, "stderr": "In file included from /usr/src/iavf-4.0.1/src/iavf.h:43,\n from /usr/src/iavf-4.0.1/src/iavf_main.c:4:\n/usr/src/iavf-4.0.1/src/kcompat.h:2758:10: fatal error: linux/pci-aspm.h: No such file or directory\n #include <linux/pci-aspm.h>\n ^~~~~~~~~~~~~~~~~~\ncompilation terminated.\ngmake[2]: *** [scripts/Makefile.build:316: /usr/src/iavf-4.0.1/src/iavf_main.o] Error 1\ngmake[1]: *** [Makefile:1544: module/usr/src/iavf-4.0.1/src] Error 2\ngmake: *** [Makefile:60: default] Error 2\n", "stderr_lines": ["In file included from /usr/src/iavf-4.0.1/src/iavf.h:43,", " from /usr/src/iavf-4.0.1/src/iavf_main.c:4:", "/usr/src/iavf-4.0.1/src/kcompat.h:2758:10: fatal error: linux/pci-aspm.h: No such file or directory", " #include <linux/pci-aspm.h>", " ^~~~~~~~~~~~~~~~~~", "compilation terminated.", "gmake[2]: *** [scripts/Makefile.build:316: /usr/src/iavf-4.0.1/src/iavf_main.o] Error 1", "gmake[1]: *** [Makefile:1544: module/usr/src/iavf-4.0.1/src] Error 2", "gmake: *** [Makefile:60: default] Error 2"], "stdout": "*** The target kernel has CONFIG_MODULE_SIG_ALL enabled, but\n*** the signing key cannot be found. Module signing has been\n*** disabled for this build.\ngmake[1]: Entering directory '/usr/src/kernels/4.18.0-240.1.1.el8_3.x86_64'\n CC [M] /usr/src/iavf-4.0.1/src/iavf_main.o\ngmake[1]: Leaving directory '/usr/src/kernels/4.18.0-240.1.1.el8_3.x86_64'\n", "stdout_lines": ["*** The target kernel has CONFIG_MODULE_SIG_ALL enabled, but", "*** the signing key cannot be found. Module signing has been", "*** disabled for this build.", "gmake[1]: Entering directory '/usr/src/kernels/4.18.0-240.1.1.el8_3.x86_64'", " CC [M] /usr/src/iavf-4.0.1/src/iavf_main.o", "gmake[1]: Leaving directory '/usr/src/kernels/4.18.0-240.1.1.el8_3.x86_64'"]}
