AVML (Acquire Volatile Memory for Linux)

Summary

A portable volatile memory acquisition tool for Linux.

AVML is an X86_64 userland volatile memory acquisition tool written in Rust, intended to be deployed as a static binary. AVML can be used to acquire memory without knowing the target OS distribution or kernel a priori. No on-target compilation or fingerprinting is needed.

Features

  • Save recorded images to external locations via Azure Blob Store or HTTP PUT
  • Automatic Retry (in case of network connection issues) with exponential backoff for uploading to Azure Blob Store
  • Optional page-level compression using Snappy
  • Uses the LiME output format (when not using compression)

Memory Sources

  • /dev/crash
  • /proc/kcore
  • /dev/mem

If the memory source is not specified on the command line, AVML will iterate over the memory sources to find a functional one.

NOTE: If the kernel feature kernel_lockdown is enabled, AVML will not be able to acquire memory.
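
If acquisition fails, it can help to check which candidate sources exist and whether lockdown is active before running AVML. A minimal sketch (the lockdown file exists only on kernels built with the lockdown LSM):

# Check which candidate memory sources exist on this host
ls -l /dev/crash /proc/kcore /dev/mem 2>/dev/null

# On lockdown-capable kernels, anything other than "[none]" will block acquisition
cat /sys/kernel/security/lockdown 2>/dev/null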

Tested Distributions

  • Ubuntu: 12.04, 14.04, 16.04, 18.04, 18.10, 19.04, 19.10, 20.04, 21.04, 22.04
  • CentOS: 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.9
  • RHEL: 6.7, 6.8, 6.9, 7.0, 7.2, 7.3, 7.4, 7.5, 7.7, 8.5, 9.0
  • Debian: 8, 9, 10, 11, 12
  • Oracle Linux: 6.8, 6.9, 6.10, 7.3, 7.4, 7.5, 7.6, 7.9, 8.5, 9.0
  • CBL-Mariner: 1.0, 2.0

Getting Started

Capturing a compressed memory image

On the target host:

avml --compress output.lime.compressed

Capturing an uncompressed memory image

On the target host:

avml output.lime

Capturing a memory image & uploading to Azure Blob Store

On a secure host with az cli credentials, generate a SAS URL.

EXPIRY=$(date -d '1 day' '+%Y-%m-%dT%H:%MZ')
SAS_URL=$(az storage blob generate-sas --account-name ACCOUNT --container CONTAINER --name test.lime --full-uri --permissions c --output tsv --expiry ${EXPIRY})

On the target host, execute avml with the generated SAS token.

avml --sas-url ${SAS_URL} --delete output.lime

Capturing a memory image of an Azure VM using VM Extensions

On a secure host with az cli credentials, do the following:

  1. Generate a SAS URL (see above)
  2. Create config.json containing the following information:
{
    "commandToExecute": "./avml --compress --sas-url <GENERATED_SAS_URL> --delete",
    "fileUris": ["https://FULL.URL.TO.AVML.example.com/avml"]
}
  3. Execute the customScript extension with the specified config.json:
az vm extension set -g RESOURCE_GROUP --vm-name VM_NAME --publisher Microsoft.Azure.Extensions -n customScript --settings config.json
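
The extension's provisioning state can then be checked from the same secure host (a sketch using az vm extension show):

az vm extension show -g RESOURCE_GROUP --vm-name VM_NAME -n customScript --query provisioningState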

To upload to AWS S3 or GCP Cloud Storage

On a secure host, generate an S3 pre-signed URL or a GCP pre-signed URL.

On the target host, execute avml with the generated pre-signed URL.

avml --put ${URL} --delete output.lime
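
As one illustration for GCP, gsutil can mint a pre-signed PUT URL from a service-account key. A sketch (the output parsing is an assumption and may need adjusting for your gsutil version):

# Generate a pre-signed PUT URL valid for 1 day; the URL is the last field of the output
URL=$(gsutil signurl -m PUT -d 1d service-account.json gs://BUCKET/output.lime | tail -1 | awk '{print $NF}')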

To decompress an AVML-compressed image

avml-convert ./compressed.lime ./uncompressed.lime

To compress an uncompressed LiME image

avml-convert --source-format lime --format lime_compressed ./uncompressed.lime ./compressed.lime
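
A quick round-trip sanity check built from the two conversions above (a sketch; cmp should report no differences):

avml-convert --source-format lime --format lime_compressed ./uncompressed.lime ./compressed.lime
avml-convert ./compressed.lime ./roundtrip.lime
cmp ./uncompressed.lime ./roundtrip.lime && echo "round-trip OK"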

Usage

A portable volatile memory acquisition tool

Usage: avml [OPTIONS] <FILENAME>

Arguments:
  <FILENAME>
          name of the file to write to on local system

Options:
      --compress
          compress via snappy

      --source <SOURCE>
          specify input source

          Possible values:
          - /dev/crash:
            Provides a read-only view of physical memory.  Access to memory using this device must be page-aligned and read one page at a time
          - /dev/mem:
            Provides a read-write view of physical memory, though AVML opens it in a read-only fashion.  Access to memory using this device can be disabled using the kernel configuration options `CONFIG_STRICT_DEVMEM` or `CONFIG_IO_STRICT_DEVMEM`
          - /proc/kcore:
            Provides a virtual ELF coredump of kernel memory.  This can be used to access physical memory

      --max-disk-usage <MAX_DISK_USAGE>
          Specify the maximum estimated disk usage (in MB)

      --max-disk-usage-percentage <MAX_DISK_USAGE_PERCENTAGE>
          Specify the maximum estimated disk usage to stay under, as a percentage

      --url <URL>
          upload via HTTP PUT upon acquisition

      --delete
          delete upon successful upload

      --sas-url <SAS_URL>
          upload via Azure Blob Store upon acquisition

      --sas-block-size <SAS_BLOCK_SIZE>
          specify maximum block size in MiB

      --sas-block-concurrency <SAS_BLOCK_CONCURRENCY>
          specify blob upload concurrency

          [default: 10]

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version

Building on Ubuntu

# Install MUSL
sudo apt-get install musl-dev musl-tools musl

# Install Rust via rustup
curl https://sh.rustup.rs -sSf | sh -s -- -y

# Add the MUSL target for Rust
rustup target add x86_64-unknown-linux-musl

# Build
cargo build --release --target x86_64-unknown-linux-musl

# Build without upload functionality
cargo build --release --target x86_64-unknown-linux-musl --no-default-features
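
To confirm the result is a fully static binary before copying it to an arbitrary target (a sketch; paths assume cargo's default layout):

# A statically linked binary reports "not a dynamic executable" under ldd
file target/x86_64-unknown-linux-musl/release/avml
ldd target/x86_64-unknown-linux-musl/release/avml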

Testing on Azure

The testing scripts will create, use, and cleanup a number of resource groups, virtual machines, and a storage account.

  1. Install az cli
  2. Login to your Azure subscription using: az login
  3. Build avml (see above)
  4. ./eng/test-on-azure.sh

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Reporting Security Issues

Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) at secure@microsoft.com. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.

Contributors

bmc-msft, cole14, demoray, dependabot[bot], digitalisx, iljavs, microsoft-github-policy-service[bot], microsoftopensource, msftgits

Issues

Error when running on CentOS 6.10 x64

Upon running, I get the following error:

./avml mem.dmp
thread 'main' panicked at 'invalid range', src/libcore/option.rs:1190:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Aborted (core dumped)

Kernel release is 2.6.32-754.30.2.el6.x86_64

Splitting memory dump before S3 upload

Would there be a way for you all to split the resulting memory dump into parts of less than 5 GB once acquired, and then ship them to S3? We are hitting the 5 GB limit for AWS pre-signed URL uploads while trying to ship the full memory file. The memory is being compressed, but it still exceeds the 5 GB limit. Any insight would be greatly appreciated! Thanks!
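
Not an official answer, but a possible interim workaround is to split the compressed image into parts under 5 GB and upload each part to its own pre-signed URL. A sketch (PART_URL stands for a hypothetical per-part URL):

# Split into 4 GB chunks; each chunk needs its own pre-signed URL
split -b 4G -d output.lime.compressed output.lime.compressed.part.
for part in output.lime.compressed.part.*; do
    curl --fail -X PUT --upload-file "${part}" "${PART_URL}"   # PART_URL: hypothetical per-part URL
done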

Feature request: stream memdump over the network

For forensic purposes, it is desirable to leave as small a (memory and disk) footprint as possible on the system whose memory is being dumped.
Regarding the disk footprint, it would for this reason be useful to be able to stream the memory directly across the network to another system.

Request: have compressed images use a different extension that identifies them as being compressed

I've spent several hours troubleshooting my Volatility3 configuration, only to find that my memory capture (captured with AVML) was compressed, which is what was actually causing the problem.

After sharing my mistake with colleagues and other investigators, I was told by several people that they have done the exact same thing - wasted several hours of an investigation troubleshooting, only to find out that their image was compressed.

An issue has been opened with Volatility3 for better error handling of AVML-compressed images, but the issue is rooted in AVML.

If compressed image file names were appended with a different file extension, for example ".compressed", this would likely mitigate many user errors and also allow the maintainers of Volatility to more gracefully handle issues related to AVML-compressed images.
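
In the meantime, the magic bytes at the start of the file can distinguish the formats. A sketch, assuming the LiME magic appears on disk as the ASCII bytes "EMiL" and AVML's compressed header begins with "AVML":

head -c 4 capture.lime | xxd   # "EMiL" => uncompressed LiME, "AVML" => AVML-compressed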

Functional on Fedora?

When I run sudo avml avml.out, I get Error: "unable to read physical memory".

Is there a way to get it to work?

I installed via cargo install avml.

[Request] Output to stdout

Thank you for the amazing tool. One request for the team: would it be possible to send the output to stdout instead of to a file? Often the free disk space is smaller than the size of memory, and compression is not enough. Currently I am using CIFS, S3FS, or mounting an additional volume into the VM and formatting it, so that I can dump into external storage that is mounted locally. However, this creates unwanted artefacts.

If I were able to send output to stdout, I could pipe it to any of the usual file-transfer tools without touching disk: rsync, aws s3, scp, etc. Thank you!

NOTE: I am aware of the risk of slowing the memory acquisition by dumping to an external device, but within the same data center we are talking about 100-150 MB/s.

Build fails with only 'put' feature enabled.

Building AVML fails when only the put feature is enabled, as in cargo build --release --no-default-features --features put.

There are two causes:

  • The put feature of AVML requires tokio-util, but tokio-util is not listed as a dependency of put in Cargo.toml.
  • The put feature of AVML requires the stream feature of the reqwest dependency, which is not enabled in Cargo.toml.
    A full-featured build of AVML does not fail, so presumably the stream feature of reqwest is indirectly enabled by some other dependency.

Add Stdout Option

Hi,

Would it be possible to add stdout output as an option so that I can then pipe the output directly to another tool like socat? Thanks

Error while running on Amazon Linux 2

I am attempting to get avml to run on an Amazon Linux 2 instance (uname -r == 4.14.225-168.357.amzn2.x86_64), but I continue to get an "Unable to read physical memory" error.

Steps taken:

  1. Logged onto instance as ec2-user and elevated to root
  2. Pulled down the latest release-v0.2.1 of AVML via wget (md5:4d360eba0cdba02adf0900f64ae3dc1c), ran "chmod +x avml"
  3. Ran (as root) "./avml memory.mem", got above mentioned error
  4. Also tried "./avml --compress memory.mem", got same error

I have also attempted building avml with rust/cargo, tested it on an Ubuntu VM (it worked), transferred it over to the AL2 instance, and got the same error. Please assist! Thank you.

`Unable to read memory` on x86-64 Ubuntu

I was testing this out on an Ubuntu 22.04 (64-bit) virtual machine under VMware Fusion and am having some issues. I get the same results whether I install version 0.6.1 from cargo or build and run from the current HEAD of main, which is:

mmyers@ubuntu-22-04-vm:~/Desktop/avml$ /home/mmyers/.cargo/bin/avml --version
avml 0.6.1
mmyers@ubuntu-22-04-vm:~/Desktop/avml$ sudo /home/mmyers/.cargo/bin/avml ~/Desktop/quick_test_0.6.1.lime
Error: error: unable to read memory
caused by:
    0: unable to create memory snapshot:     
        error: unable to create memory snapshot from source: /dev/crash
        caused by:
            0: unable to create memory snapshot
            1: unable to read memory
            2: No such file or directory (os error 2)
        
        error: unable to create memory snapshot from source: /proc/kcore
        caused by:
            0: unable to find memory range: 175116312..175124567
        
        error: unable to create memory snapshot from source: /dev/mem
        caused by:
            0: unable to create memory snapshot
            1: write block failed: 1048576..175116311

The filesystem should have ~55 GB free, and the RAM to capture should be ~12 GB uncompressed. But this seems more of a failure to read memory than to dump it? I understand that AVML will iterate over the memory sources to find a functional source, so I suppose all three methods failed here.

This looks like #73, but I will try to be responsive here to help figure out what this is.

[Feature Request] Ability to Upload Normal Files

One of the neat features in AVML is the ability to securely upload to cloud storage using pre-signed URLs. This is a feature I haven't often seen in other tools (you typically need your cloud credentials on the target machine, which is dangerous). Would it be possible to add the ability to upload a normal file instead of just memory captures? This may be useful in cases where you haven't yet determined whether you need a full investigation that warrants a full disk backup plus a full memory capture. Thanks

Netcat or Socat

Is it possible to send the memory dump via socat or netcat, without saving it to disk first?

unable to read memory

I get an error when I take a memory dump: "unable to read memory".
Please advise on a solution.

I have successfully built on 32-bit architecture

Hi, thanks for this great project.

I needed to use this on the i686 (32-bit) architecture, so I tried to build from source using the following steps:

sudo apt-get install musl-dev musl-tools musl
curl https://sh.rustup.rs -sSf | sh -s -- -y
rustup target add i686-unknown-linux-musl
cargo build --release --target i686-unknown-linux-musl
cargo build --release --target i686-unknown-linux-musl --no-default-features

The only problem was the block-size calculations in src/upload/blobstore.rs, which overflow on 32-bit Linux:

const BLOB_MAX_BLOCKS: usize = 50_000;
const BLOB_MAX_BLOCK_SIZE: usize = ONE_MB * 4000;
const BLOB_MAX_FILE_SIZE: usize = BLOB_MAX_BLOCKS * BLOB_MAX_BLOCK_SIZE;
const BLOB_MIN_BLOCK_SIZE: usize = ONE_MB * 5;
const MAX_CONCURRENCY: usize = 10;
const REASONABLE_BLOCK_SIZE: usize = ONE_MB * 100;

Because I am not going to use the blob storage upload feature, I changed these constants to almost zero. (On 32-bit targets, usize is 32 bits, so BLOB_MAX_BLOCKS * BLOB_MAX_BLOCK_SIZE, roughly 190 TiB, overflows the ~4 GiB usize maximum.)

After this little change, I successfully built it and it works on 32-bit.

I hope this helps your work toward 32-bit support. Because I don't know Rust, I can't open a PR for this.

Binary not available?

Hello,
it would be very nice to have a binary instead of source, because compiling on an air-gapped machine is a big problem.
My router and my PC are infected, and I want to take a RAM dump with avml, but I need to build it on an offline machine; if I connect the clean machine to the internet through the infected router, even the clean machine may become infected. So I would have to fetch everything needed to build avml using the infected, online machine, which is, to be honest, a complicated and risky job.
Having binaries for major distributions like Ubuntu would be very appreciated.

Support for Alpine Linux?

Hi!

First of all, thanks for a great tool!
I'm wondering if there are any plans to support Alpine Linux?
Especially for running AVML as a privileged container on Alpine Linux in a GKE/Kubernetes cluster to do memory forensics on the host.

Error when running on CentOS 6.9 x64

Hi, I encountered a problem when running on CentOS 6.9 x64.

Here is the backtrace:

$ RUST_BACKTRACE=full ./avml-old mem.dump
thread 'main' panicked at 'invalid range', src/libcore/option.rs:1190:5
stack backtrace:
   0:           0x5b77f6 - backtrace::backtrace::libunwind::trace::h23eff9b732072ec6
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
   1:           0x5b77f6 - backtrace::backtrace::trace_unsynchronized::he99443e3e043181b
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
   2:           0x5b77f6 - std::sys_common::backtrace::_print_fmt::h372c2c133ace10bb
                               at src/libstd/sys_common/backtrace.rs:76
   3:           0x5b77f6 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h45d79e0e31af34d1
                               at src/libstd/sys_common/backtrace.rs:60
   4:           0x4a6f5c - core::fmt::write::h6412e51ee7d5337d
                               at src/libcore/fmt/mod.rs:1030
   5:           0x5b7116 - std::io::Write::write_fmt::hc321dd9edd55d777
                               at src/libstd/io/mod.rs:1412
   6:           0x5b6e00 - std::sys_common::backtrace::_print::h308bfa161bac386c
                               at src/libstd/sys_common/backtrace.rs:64
   7:           0x5b6e00 - std::sys_common::backtrace::print::h444e6c0fa7bcde58
                               at src/libstd/sys_common/backtrace.rs:49
   8:           0x5b6e00 - std::panicking::default_hook::{{closure}}::h0556362794d99987
                               at src/libstd/panicking.rs:196
   9:           0x5b676a - std::panicking::default_hook::h9a006f22406ae1a5
                               at src/libstd/panicking.rs:210
  10:           0x5b676a - std::panicking::rust_panic_with_hook::h00181808d6f69bb1
                               at src/libstd/panicking.rs:473
  11:           0x5b62fe - std::panicking::continue_panic_fmt::hd4650be798eb9cfd
                               at src/libstd/panicking.rs:380
  12:           0x5c0906 - rust_begin_unwind
                               at src/libstd/panicking.rs:307
  13:           0x4a4959 - core::panicking::panic_fmt::h6a2a10b60da43dee
                               at src/libcore/panicking.rs:85
  14:           0x4a7686 - core::option::expect_failed::h53ee0d97814b8f6b
                               at src/libcore/option.rs:1190
  15:           0x41d672 - core::option::Option<T>::expect::he0be438802b52b75
  16:           0x401d09 - avml::main::h13066322a5076777
  17:           0x41305e - std::rt::lang_start::{{closure}}::h212053e7f23315d2
  18:           0x405021 - main
Aborted

The result of cat /proc/iomem:

$ cat /proc/iomem 
00000000-00000fff : reserved
00001000-0009ffff : System RAM
000a0000-000bffff : PCI Bus 0000:00
000c0000-000c7fff : Video ROM
000f0000-000fffff : System ROM
00100000-54d93fff : System RAM
  01000000-01556eb4 : Kernel code
  01556eb5-01c2170f : Kernel data
  01d77000-02045963 : Kernel bss
  03000000-0c8fffff : System RAM
54d94000-54da2fff : reserved
  54d98018-54d98067 : APEI ERST
  54d98070-54d98077 : APEI ERST
54da3000-5a12dfff : System RAM
5a12e000-6a1aefff : reserved
6a1af000-6c65ffff : System RAM
6c660000-6c691fff : reserved
6c692000-6de34fff : System RAM
6de35000-793fefff : reserved
793ff000-7b3fefff : ACPI Non-volatile Storage
  7b3ed000-7b3eebff : APEI ERST
7b3ff000-7b791fff : ACPI Tables
7b792000-7b7fffff : System RAM
7b800000-7bffffff : RAM buffer
80000000-8fffffff : PCI MMCONFIG 0 [00-ff]
  80000000-8fffffff : reserved
90000000-c7ffbfff : PCI Bus 0000:00
  90000000-90ffffff : PCI Bus 0000:10
    90000000-90ffffff : PCI Bus 0000:11
      90000000-90ffffff : PCI Bus 0000:12
        90000000-90ffffff : PCI Bus 0000:13
          90000000-90ffffff : 0000:13:00.0
          90000000-902fffff : efifb
  91000000-919fffff : PCI Bus 0000:10
    91000000-919fffff : PCI Bus 0000:11
      91000000-918fffff : PCI Bus 0000:12
        91000000-918fffff : PCI Bus 0000:13
          91000000-917fffff : 0000:13:00.0
          91800000-91803fff : 0000:13:00.0
      91900000-919fffff : PCI Bus 0000:14
  91a00000-91cfffff : PCI Bus 0000:01
    91a00000-91a1ffff : 0000:01:00.7
      91a00000-91a1ffff : lpfc
    91a20000-91a3ffff : 0000:01:00.7
      91a20000-91a3ffff : lpfc
    91a40000-91a5ffff : 0000:01:00.6
      91a40000-91a5ffff : lpfc
    91a60000-91a7ffff : 0000:01:00.6
      91a60000-91a7ffff : lpfc
    91a80000-91a9ffff : 0000:01:00.5
      91a80000-91a9ffff : lpfc
    91aa0000-91abffff : 0000:01:00.5
      91aa0000-91abffff : lpfc
    91ac0000-91adffff : 0000:01:00.4
      91ac0000-91adffff : lpfc
    91ae0000-91afffff : 0000:01:00.4
      91ae0000-91afffff : lpfc
    91b00000-91b1ffff : 0000:01:00.3
      91b00000-91b1ffff : be2net
    91b20000-91b3ffff : 0000:01:00.3
      91b20000-91b3ffff : be2net
    91b40000-91b5ffff : 0000:01:00.2
      91b40000-91b5ffff : be2net
    91b60000-91b7ffff : 0000:01:00.2
      91b60000-91b7ffff : be2net
    91b80000-91b9ffff : 0000:01:00.1
      91b80000-91b9ffff : be2net
    91ba0000-91bbffff : 0000:01:00.1
      91ba0000-91bbffff : be2net
    91bc0000-91bdffff : 0000:01:00.0
      91bc0000-91bdffff : be2net
    91be0000-91bfffff : 0000:01:00.0
      91be0000-91bfffff : be2net
    91c00000-91c03fff : 0000:01:00.7
      91c00000-91c03fff : lpfc
    91c04000-91c07fff : 0000:01:00.6
      91c04000-91c07fff : lpfc
    91c08000-91c0bfff : 0000:01:00.5
      91c08000-91c0bfff : lpfc
    91c0c000-91c0ffff : 0000:01:00.4
      91c0c000-91c0ffff : lpfc
    91c10000-91c13fff : 0000:01:00.3
      91c10000-91c13fff : be2net
    91c14000-91c17fff : 0000:01:00.2
      91c14000-91c17fff : be2net
    91c18000-91c1bfff : 0000:01:00.1
      91c18000-91c1bfff : be2net
    91c1c000-91c1ffff : 0000:01:00.0
      91c1c000-91c1ffff : be2net
    91c80000-91cfffff : 0000:01:00.0
  91d00000-91efffff : PCI Bus 0000:15
    91d00000-91dfffff : 0000:15:00.0
    91e00000-91e0ffff : 0000:15:00.0
      91e00000-91e0ffff : megasas: LSI
  91f00000-91ffffff : PCI Bus 0000:0b
    91f00000-91f03fff : 0000:0b:00.1
      91f00000-91f03fff : lpfc
    91f04000-91f07fff : 0000:0b:00.0
      91f04000-91f07fff : lpfc
    91f08000-91f08fff : 0000:0b:00.1
      91f08000-91f08fff : lpfc
    91f09000-91f09fff : 0000:0b:00.0
      91f09000-91f09fff : lpfc
  92000000-920003ff : 0000:00:1d.0
    92000000-920003ff : ehci_hcd
  92001000-920013ff : 0000:00:1a.0
    92001000-920013ff : ehci_hcd
  92003000-92003fff : 0000:00:05.4
  92010000-9201ffff : 0000:00:11.0
  92100000-924fffff : PCI Bus 0000:01
    92100000-9217ffff : 0000:01:00.1
    92180000-921fffff : 0000:01:00.2
    92200000-9227ffff : 0000:01:00.3
    92280000-922fffff : 0000:01:00.4
    92300000-9237ffff : 0000:01:00.5
    92380000-923fffff : 0000:01:00.6
    92400000-9247ffff : 0000:01:00.7
  92500000-925fffff : PCI Bus 0000:0b
    92500000-9253ffff : 0000:0b:00.0
    92540000-9257ffff : 0000:0b:00.1
  92600000-926fffff : PCI Bus 0000:15
    92600000-9261ffff : 0000:15:00.0
c7ffc000-c7ffcfff : dmar1
c8000000-fbffbfff : PCI Bus 0000:80
  c8000000-c8000fff : 0000:80:05.4
fbffc000-fbffcfff : dmar0
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fec40000-fec403ff : IOAPIC 2
fed00000-fed003ff : HPET 0
fed12000-fed1200f : pnp 00:07
fed12010-fed1201f : pnp 00:07
fed1b000-fed1bfff : pnp 00:07
fed1c000-fed1ffff : reserved
  fed1f410-fed1f414 : iTCO_wdt.0.auto
fed45000-fed8bfff : pnp 00:07
fee00000-feefffff : pnp 00:07
  fee00000-fee00fff : Local APIC
ff000000-ffffffff : reserved
  ff000000-ffffffff : pnp 00:07
100000000-607fffffff : System RAM
380000000000-383fffffffff : PCI Bus 0000:00
  383ffffe0000-383ffffeffff : 0000:00:14.0
    383ffffe0000-383ffffeffff : xhci_hcd
  383fffff1000-383fffff10ff : 0000:00:1f.3
  383fffff2000-383fffff200f : 0000:00:16.1
  383fffff3000-383fffff300f : 0000:00:16.0
384000000000-387fffffffff : PCI Bus 0000:80
