windows_exporter

A Prometheus exporter for Windows machines.

Collectors

Name Description
ad Active Directory Domain Services
adcs Active Directory Certificate Services
adfs Active Directory Federation Services
cache Cache metrics
cpu CPU usage
cpu_info CPU Information
cs "Computer System" metrics (system properties, num cpus/total memory)
container Container metrics
dfsr DFSR metrics
dhcp DHCP Server
dns DNS Server
exchange Exchange metrics
fsrmquota Microsoft File Server Resource Manager (FSRM) Quotas collector
hyperv Hyper-V hosts
iis IIS sites and applications
logical_disk Logical disks, disk I/O
logon User logon sessions
memory Memory usage metrics
mscluster_cluster MSCluster cluster metrics
mscluster_network MSCluster network metrics
mscluster_node MSCluster Node metrics
mscluster_resource MSCluster Resource metrics
mscluster_resourcegroup MSCluster ResourceGroup metrics
msmq MSMQ queues
mssql SQL Server Performance Objects metrics
netframework_clrexceptions .NET Framework CLR Exceptions
netframework_clrinterop .NET Framework Interop Metrics
netframework_clrjit .NET Framework JIT metrics
netframework_clrloading .NET Framework CLR Loading metrics
netframework_clrlocksandthreads .NET Framework locks and threads metrics
netframework_clrmemory .NET Framework Memory metrics
netframework_clrremoting .NET Framework Remoting metrics
netframework_clrsecurity .NET Framework Security Check metrics
net Network interface I/O
os OS metrics (memory, processes, users)
process Per-process metrics
remote_fx RemoteFX protocol (RDP) metrics
scheduled_task Scheduled Tasks metrics
service Service state metrics
smb SMB Server
smtp IIS SMTP Server
system System calls
tcp TCP connections
teradici_pcoip Teradici PCoIP session metrics
time Windows Time Service
thermalzone Thermal information
terminal_services Terminal services (RDS)
textfile Read Prometheus metrics from a text file
vmware_blast VMware Blast session metrics
vmware Performance counters installed by the VMware Guest agent

See the linked documentation on each collector for more information on reported metrics, configuration settings and usage examples.

Filtering enabled collectors

The windows_exporter will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.

For advanced use the windows_exporter can be passed an optional list of collectors to filter metrics. The collect[] parameter may be used multiple times. In Prometheus configuration you can use this syntax under the scrape config.

  params:
    collect[]:
      - foo
      - bar

This can be useful for having different Prometheus servers collect specific metrics from nodes.
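For context, a complete scrape job using this parameter might look like the following sketch (the job name and target address are placeholders):

  scrape_configs:
    - job_name: "windows"
      params:
        collect[]:
          - cpu
          - memory
      static_configs:
        - targets: ["192.168.1.10:9182"]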

Flags

windows_exporter accepts flags to configure certain behaviours. The ones configuring the global behaviour of the exporter are listed below, while collector-specific ones are documented in the respective collector documentation above.

Flag Description Default value
--web.listen-address host:port for exporter. :9182
--telemetry.path URL path for surfacing collected metrics. /metrics
--telemetry.max-requests Maximum number of concurrent requests. 0 to disable. 5
--collectors.enabled Comma-separated list of collectors to use. Use [defaults] as a placeholder which gets expanded to include all the collectors enabled by default. [defaults]
--collectors.print If true, print available collectors and exit.
--scrape.timeout-margin Seconds to subtract from the timeout allowed by the client. Tune to allow for overhead or high loads. 0.5
--web.config.file A web config for setting up TLS and Auth None
--config.file Path or URL of a configuration file to load None
--config.file.insecure-skip-verify Skip TLS verification when loading the config file from a URL false
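For example, to serve metrics on a non-default port and path (the values here are illustrative):

.\windows_exporter.exe --web.listen-address ":9183" --telemetry.path "/monitoring/metrics"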

Installation

The latest release can be downloaded from the releases page.

Each release provides a .msi installer. The installer will set up windows_exporter as a Windows service, as well as create an exception in the Windows Firewall.

If the installer is run without any parameters, the exporter will run with default settings for enabled collectors, ports, etc. The following parameters are available:

Name Description
ENABLED_COLLECTORS As the --collectors.enabled flag, provide a comma-separated list of enabled collectors
LISTEN_ADDR The IP address to bind to. Defaults to 0.0.0.0
LISTEN_PORT The port to bind to. Defaults to 9182.
METRICS_PATH The path at which to serve metrics. Defaults to /metrics
TEXTFILE_DIRS As the --collector.textfile.directories flag, provide a directory to read text files with metrics from
REMOTE_ADDR Allows setting comma separated remote IP addresses for the Windows Firewall exception (allow list). Defaults to an empty string (any remote address).
EXTRA_FLAGS Allows passing full CLI flags. Defaults to an empty string.

Parameters are sent to the installer via msiexec. Example invocations:

msiexec /i <path-to-msi-file> ENABLED_COLLECTORS=os,iis LISTEN_PORT=5000

Example service collector with a custom query.

msiexec /i <path-to-msi-file> ENABLED_COLLECTORS=os,service --% EXTRA_FLAGS="--collector.service.services-where ""Name LIKE 'sql%'"""

On some older versions of Windows, you may need to surround parameter values with double quotes to get the installation command parsing properly:

msiexec /i C:\Users\Administrator\Downloads\windows_exporter.msi ENABLED_COLLECTORS="ad,iis,logon,memory,process,tcp,textfile,thermalzone" TEXTFILE_DIRS="C:\custom_metrics\"

To install the exporter and create a firewall exception, use the following command:

msiexec /i <path-to-msi-file> ADD_FIREWALL_EXCEPTION=yes

PowerShell versions 7.3 and above require PSNativeCommandArgumentPassing to be set to Legacy when using --% EXTRA_FLAGS:

$PSNativeCommandArgumentPassing = 'Legacy'
msiexec /i <path-to-msi-file> ENABLED_COLLECTORS=os,service --% EXTRA_FLAGS="--collector.service.services-where ""Name LIKE 'sql%'"""

Kubernetes Implementation

See detailed steps to install on Windows Kubernetes here.

Supported versions

windows_exporter supports Windows Server versions 2016 and later, and desktop Windows versions 10 and 11 (21H2 or later).

Windows Server 2012 and 2012R2 are supported on a best-effort basis only and are not guaranteed to work.

Usage

go get -u github.com/prometheus/promu
go get -u github.com/prometheus-community/windows_exporter
cd $env:GOPATH/src/github.com/prometheus-community/windows_exporter
promu build -v
.\windows_exporter.exe

The Prometheus metrics will be exposed on localhost:9182.
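To verify the endpoint is responding, a quick check from PowerShell (assuming the default port):

(Invoke-WebRequest -Uri "http://localhost:9182/metrics" -UseBasicParsing).Content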

Examples

Enable only service collector and specify a custom query

.\windows_exporter.exe --collectors.enabled "service" --collector.service.services-where "Name='windows_exporter'"

Enable only process collector and specify a custom query

.\windows_exporter.exe --collectors.enabled "process" --collector.process.include="firefox.+"

When there are multiple processes with the same name, WMI represents those after the first instance as process-name#index. So to get them all, rather than just the first one, the regular expression must use .+. See process for more information.
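To aggregate across all instances of a process in PromQL, sum over the matching series; a sketch assuming the process collector's windows_process_cpu_time_total metric and its process label:

sum by (instance) (rate(windows_process_cpu_time_total{process=~"firefox.+"}[5m]))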

Using [defaults] with --collectors.enabled argument

Using [defaults] in the --collectors.enabled argument expands it with all default collectors.

.\windows_exporter.exe --collectors.enabled "[defaults],process,container"

This enables the additional process and container collectors on top of the defaults.

Using a configuration file

YAML configuration files can be specified with the --config.file flag, e.g. .\windows_exporter.exe --config.file=config.yml. If you are using an absolute path, make sure to quote the path, e.g. .\windows_exporter.exe --config.file="C:\Program Files\windows_exporter\config.yml"

It is also possible to load the configuration from a URL. e.g. .\windows_exporter.exe --config.file="https://example.com/config.yml"

If you need to skip TLS verification, you can use the --config.file.insecure-skip-verify flag. e.g. .\windows_exporter.exe --config.file="https://example.com/config.yml" --config.file.insecure-skip-verify

collectors:
  enabled: cpu,cs,net,service
collector:
  service:
    services-where: "Name='windows_exporter'"
log:
  level: warn

An example configuration file can be found here.

Configuration file notes

Configuration file values can be mixed with CLI flags, e.g.:

.\windows_exporter.exe --collectors.enabled=cpu,logon

log:
  level: debug

CLI flags take priority over values specified in the configuration file.

License

Licensed under MIT.

windows_exporter's Issues

Win7 system (amd64) "system collector .. (Invalid query )"

Hi,
just tried to run wmi_exporter on a Win7 system. It works, but I get a few error messages for failed queries similar to the ones in #47:

λ wmi_exporter.exe
time="2017-01-05T08:57:06+01:00" level=info msg="Enabled collectors: cpu, cs, logical_disk, net, os, system" source="exporter.go:156"
time="2017-01-05T08:57:06+01:00" level=info msg="Starting WMI exporter (version=, branch=, revision=)" source="exporter.go:167"
time="2017-01-05T08:57:06+01:00" level=info msg="Build context (go=go1.7.4, user=, date=)" source="exporter.go:168"
time="2017-01-05T08:57:06+01:00" level=info msg="Starting server on :9182" source="exporter.go:171"
2017/01/05 08:57:21 [ERROR] failed collecting net metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:21+01:00" level=error msg="ERROR: net collector failed after 6.745000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:57:26 [ERROR] failed collecting os metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:26+01:00" level=error msg="ERROR: cpu collector failed after 12.275000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:57:33 [ERROR] failed collecting os metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:33+01:00" level=error msg="ERROR: system collector failed after 18.798000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:57:38 [ERROR] failed collecting logical_disk metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:38+01:00" level=error msg="ERROR: logical_disk collector failed after 24.264000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:57:45 [ERROR] failed collecting os metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:45+01:00" level=error msg="ERROR: cpu collector failed after 6.434000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:57:50 [ERROR] failed collecting logical_disk metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:50+01:00" level=error msg="ERROR: logical_disk collector failed after 11.955000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:57:57 [ERROR] failed collecting net metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:57:57+01:00" level=error msg="ERROR: net collector failed after 18.576000s: Exception occurred. (Invalid query )" source="exporter.go:84"
2017/01/05 08:58:02 [ERROR] failed collecting os metrics: <nil> Exception occurred. (Invalid query )
time="2017-01-05T08:58:02+01:00" level=error msg="ERROR: system collector failed after 23.978000s: Exception occurred. (Invalid query )" source="exporter.go:84"

The version is a local build of the current master b4ca341

Here are the working metrics:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 18
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 805256
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 805256
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 2706
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 178
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 131072
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 805256
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 548864
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.318912e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 7059
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.867776e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 6
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 7237
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 9344
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 19840
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 32768
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.194304e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 804206
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 229376
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 229376
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 3.084288e+06
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} NaN
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} NaN
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} NaN
http_request_duration_microseconds_sum{handler="prometheus"} 0
http_request_duration_microseconds_count{handler="prometheus"} 0
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} NaN
http_request_size_bytes{handler="prometheus",quantile="0.9"} NaN
http_request_size_bytes{handler="prometheus",quantile="0.99"} NaN
http_request_size_bytes_sum{handler="prometheus"} 0
http_request_size_bytes_count{handler="prometheus"} 0
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} NaN
http_response_size_bytes{handler="prometheus",quantile="0.9"} NaN
http_response_size_bytes{handler="prometheus",quantile="0.99"} NaN
http_response_size_bytes_sum{handler="prometheus"} 0
http_response_size_bytes_count{handler="prometheus"} 0
# HELP wmi_cs_logical_processors ComputerSystem.NumberOfLogicalProcessors
# TYPE wmi_cs_logical_processors gauge
wmi_cs_logical_processors 8
# HELP wmi_cs_physical_memory_bytes ComputerSystem.TotalPhysicalMemory
# TYPE wmi_cs_physical_memory_bytes gauge
wmi_cs_physical_memory_bytes 1.6971317248e+10
# HELP wmi_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which wmi_exporter was built.
# TYPE wmi_exporter_build_info gauge
wmi_exporter_build_info{branch="",goversion="go1.7.4",revision="",version=""} 1
# HELP wmi_exporter_scrape_duration_seconds wmi_exporter: Duration of a scrape job.
# TYPE wmi_exporter_scrape_duration_seconds summary
wmi_exporter_scrape_duration_seconds{collector="cpu",result="error",quantile="0.5"} 5.8420000000000005
wmi_exporter_scrape_duration_seconds{collector="cpu",result="error",quantile="0.9"} 5.8420000000000005
wmi_exporter_scrape_duration_seconds{collector="cpu",result="error",quantile="0.99"} 5.8420000000000005
wmi_exporter_scrape_duration_seconds_sum{collector="cpu",result="error"} 5.8420000000000005
wmi_exporter_scrape_duration_seconds_count{collector="cpu",result="error"} 1
wmi_exporter_scrape_duration_seconds{collector="cs",result="success",quantile="0.5"} 12.78
wmi_exporter_scrape_duration_seconds{collector="cs",result="success",quantile="0.9"} 12.78
wmi_exporter_scrape_duration_seconds{collector="cs",result="success",quantile="0.99"} 12.78
wmi_exporter_scrape_duration_seconds_sum{collector="cs",result="success"} 12.78
wmi_exporter_scrape_duration_seconds_count{collector="cs",result="success"} 1
wmi_exporter_scrape_duration_seconds{collector="logical_disk",result="error",quantile="0.5"} 12.756
wmi_exporter_scrape_duration_seconds{collector="logical_disk",result="error",quantile="0.9"} 12.756
wmi_exporter_scrape_duration_seconds{collector="logical_disk",result="error",quantile="0.99"} 12.756
wmi_exporter_scrape_duration_seconds_sum{collector="logical_disk",result="error"} 12.756
wmi_exporter_scrape_duration_seconds_count{collector="logical_disk",result="error"} 1
wmi_exporter_scrape_duration_seconds{collector="net",result="error",quantile="0.5"} 19.645
wmi_exporter_scrape_duration_seconds{collector="net",result="error",quantile="0.9"} 19.645
wmi_exporter_scrape_duration_seconds{collector="net",result="error",quantile="0.99"} 19.645
wmi_exporter_scrape_duration_seconds_sum{collector="net",result="error"} 19.645
wmi_exporter_scrape_duration_seconds_count{collector="net",result="error"} 1
wmi_exporter_scrape_duration_seconds{collector="os",result="success",quantile="0.5"} 12.813
wmi_exporter_scrape_duration_seconds{collector="os",result="success",quantile="0.9"} 12.813
wmi_exporter_scrape_duration_seconds{collector="os",result="success",quantile="0.99"} 12.813
wmi_exporter_scrape_duration_seconds_sum{collector="os",result="success"} 12.813
wmi_exporter_scrape_duration_seconds_count{collector="os",result="success"} 1
wmi_exporter_scrape_duration_seconds{collector="system",result="error",quantile="0.5"} 26.469
wmi_exporter_scrape_duration_seconds{collector="system",result="error",quantile="0.9"} 26.469
wmi_exporter_scrape_duration_seconds{collector="system",result="error",quantile="0.99"} 26.469
wmi_exporter_scrape_duration_seconds_sum{collector="system",result="error"} 26.469
wmi_exporter_scrape_duration_seconds_count{collector="system",result="error"} 1
# HELP wmi_os_paging_free_bytes OperatingSystem.FreeSpaceInPagingFiles
# TYPE wmi_os_paging_free_bytes gauge
wmi_os_paging_free_bytes 1.6971317248e+10
# HELP wmi_os_paging_limit_bytes OperatingSystem.SizeStoredInPagingFiles
# TYPE wmi_os_paging_limit_bytes gauge
wmi_os_paging_limit_bytes 1.6971317248e+10
# HELP wmi_os_physical_memory_free_bytes OperatingSystem.FreePhysicalMemory
# TYPE wmi_os_physical_memory_free_bytes gauge
wmi_os_physical_memory_free_bytes 1.1571462144e+10
# HELP wmi_os_process_memory_limix_bytes OperatingSystem.MaxProcessMemorySize
# TYPE wmi_os_process_memory_limix_bytes gauge
wmi_os_process_memory_limix_bytes 8.796092891136e+12
# HELP wmi_os_processes OperatingSystem.NumberOfProcesses
# TYPE wmi_os_processes gauge
wmi_os_processes 135
# HELP wmi_os_processes_limit OperatingSystem.MaxNumberOfProcesses
# TYPE wmi_os_processes_limit gauge
wmi_os_processes_limit 4.294967295e+09
# HELP wmi_os_users OperatingSystem.NumberOfUsers
# TYPE wmi_os_users gauge
wmi_os_users 3
# HELP wmi_os_virtual_memory_bytes OperatingSystem.TotalVirtualMemorySize
# TYPE wmi_os_virtual_memory_bytes gauge
wmi_os_virtual_memory_bytes 3.3940729856e+10
# HELP wmi_os_virtual_memory_free_bytes OperatingSystem.FreeVirtualMemory
# TYPE wmi_os_virtual_memory_free_bytes gauge
wmi_os_virtual_memory_free_bytes 2.8137496576e+10
# HELP wmi_os_visible_memory_bytes OperatingSystem.TotalVisibleMemorySize
# TYPE wmi_os_visible_memory_bytes gauge
wmi_os_visible_memory_bytes 1.6971317248e+10

Release 0.1

I created a 0.1 milestone as a way to track things that I think are part of a minimum viable first release.

Anything else that could be seen as required to be useful?

silent install msi

Hi,
it looks like the MSI currently doesn't support the /q* /quiet /passive flags for install/uninstall. Adding this would be nice for installs via configuration management tools like Salt, Ansible, etc.

Since the installation doesn't need any manual intervention anyway, I figure it could be added easily?

TLS authentication?

Hi,

Have you thought about implementing TLS for the endpoint? I really would prefer to secure the metric endpoints at least with a client cert check.

I know it breaks the normal Prometheus pattern of doing authentication with a proxy.
But on Windows, we really don't have many options for lightweight proxy setup.

Shouldn't be too hard to implement, I could even take a look at writing a patch maybe.

D

installer: windows event log messages are hard to read

@carlpett wrote

since we do not have a "MessageFile", all eventlog messages are prefixed with a few paragraphs about not knowing how to format the message. The message itself turns up at the end, so not a deal breaker, but pretty irritating.

Three options:

  1. Ignore it
  2. Create a message file. This is a bit bothersome, sadly, and requires a few winsdk tools to compile.
  3. Piggyback on some existing message file. This will create an implicit dependency though, so not very nice (e.g. on the .NET Framework)

(related to msi installer, introduced in #19)

Feature: CPU metrics

Possibly relevant WMI class: Win32_PerfRawData_Counters_ProcessorInformation, provides these fields:

Name (socket#+core#)
AverageIdleTime
AverageIdleTime_Base
C1TransitionsPersec
C2TransitionsPersec
C3TransitionsPersec
ClockInterruptsPersec
DPCRate
DPCsQueuedPersec
IdleBreakEventsPersec
InterruptsPersec
ParkingStatus
PercentC1Time
PercentC2Time
PercentC3Time
PercentDPCTime
PercentIdleTime
PercentInterruptTime
PercentofMaximumFrequency
PercentPerformanceLimit
PercentPriorityTime
PercentPrivilegedTime
PercentPrivilegedUtility
PercentPrivilegedUtility_Base
PercentProcessorPerformance
PercentProcessorPerformance_Base
PercentProcessorTime
PercentProcessorUtility
PercentProcessorUtility_Base
PercentUserTime
PerformanceLimitFlags
ProcessorFrequency
ProcessorStateFlags

Haven't found any MSDN documentation of this class, so potentially there might be better ones? It was the only one reported on my test machine, though.

Feature: Services states

Hi Martin,

It would be useful to get Windows Service states exposed in a similar fashion to how node_exporter exposes systemd states,

i.e. (example from node_exporter):

node_systemd_unit_state{name="syslog.service",state="activating"} 0
node_systemd_unit_state{name="syslog.service",state="active"} 0
node_systemd_unit_state{name="syslog.service",state="deactivating"} 0
node_systemd_unit_state{name="syslog.service",state="failed"} 0
node_systemd_unit_state{name="syslog.service",state="inactive"} 1

Some good information at the below URLs:

https://msdn.microsoft.com/en-us/library/aa394418(v=vs.85).aspx
https://github.com/bosun-monitor/bosun/blob/master/cmd/scollector/collectors/processes_windows.go
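A sketch of how the equivalent metric could look from this exporter, mirroring the node_exporter shape (the metric and label names here are illustrative):

wmi_service_state{name="wuauserv",state="running"} 1
wmi_service_state{name="wuauserv",state="stopped"} 0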

Set up a CI

Would like to attach travis-ci or circleci or similar to the project. Need to figure out if/how they support running Go code on Windows.

[featurerequest] Add PercentProcessorTime Counter

Hi there,

Would it be possible to add the PercentProcessorTime counter to the CPU collector?

Sample query:
select PercentProcessorTime from Win32_PerfFormattedData_PerfOS_Processor where Name = '_Total'
“PercentProcessorTime,” as it appears above, is the percentage of the time the processor is busy doing non-idle threads.
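For comparison, the same figure can also be derived from raw per-mode time counters; a hedged PromQL sketch assuming a wmi_cpu_time_total metric with a mode label:

100 - (avg(irate(wmi_cpu_time_total{mode="idle"}[5m])) * 100)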

Cheers,
Christian

CPU collector blocks every ~17 minutes on call to wmi.Query

We've deployed the wmi-exporter on a number of our Windows systems running Server 2012r2 standard. This was a version built several months ago with Go 1.8.

These systems typically push 20-30 Gbps of traffic and have 24 hyperthreads. There is generally plenty of room on half of these threads.

Having deployed the exporter, we observed that every 17-18 minutes (independent of when the exporter was started), the traffic being served by them would drop significantly for roughly 120 seconds and CPU consumption on one or two cores would spike. The "WMI Provider Host" would also increase in CPU utilisation during this time.

Killing the exporter did not cause the load to drop off immediately.

We also noticed that calls to the exporter would block during this time.

We set GOMAXPROCS to 2, to see if there were any concurrency issues that might be overloading WMI, but that didn't improve matters.

We built a new version from master on Friday June 30th with Go 1.8.1. The issue persisted.

We started to disable collectors to isolate where the problem was occurring and found that it only happened when the CPU collector was enabled. We put some printfs around the calls inside the collect function and found that it was blocking on the call to wmi.Query.

Interestingly, we did not see any pauses or breaks in the data while monitoring that data in perfmon while the query was running.

We dialled testing back to a 2 CPU VM running Windows 10 not doing anything particularly difficult or interesting and polling using "curl $blah ; sleep 60" in a loop.

The results suggest that something is going wrong every 17-18 minutes (the lines with *** were added by me):

time="2017-07-03T10:31:38-07:00" level=debug msg="OK: cpu collector succeeded after 0.032996s." source="exporter.go:90"
2017/07/03 10:32:38 *** Starting CPU Collector run...
2017/07/03 10:32:38 *** Created WMI Query for CPU...
2017/07/03 10:32:40 *** Ran WMI Query for CPU...
2017/07/03 10:32:40 *** Sending data for 0 to exporter...
2017/07/03 10:32:40 *** Sending data for 1 to exporter...
time="2017-07-03T10:32:40-07:00" level=debug msg="OK: cpu collector succeeded after 2.324834s." source="exporter.go:90"
2017/07/03 10:33:40 *** Starting CPU Collector run...
2017/07/03 10:33:40 *** Created WMI Query for CPU...
2017/07/03 10:33:40 *** Ran WMI Query for CPU...
2017/07/03 10:33:40 *** Sending data for 0 to exporter...
2017/07/03 10:33:40 *** Sending data for 1 to exporter...
time="2017-07-03T10:33:40-07:00" level=debug msg="OK: cpu collector succeeded after 0.046001s." source="exporter.go:90"
2017/07/03 10:34:40 *** Starting CPU Collector run...
2017/07/03 10:34:40 *** Created WMI Query for CPU...
2017/07/03 10:34:40 *** Ran WMI Query for CPU...
2017/07/03 10:34:40 *** Sending data for 0 to exporter...
2017/07/03 10:34:40 *** Sending data for 1 to exporter...

< snip a few more ~0.04 second runs >

time="2017-07-03T10:46:41-07:00" level=debug msg="OK: cpu collector succeeded after 0.044845s." source="exporter.go:90"
2017/07/03 10:47:41 *** Starting CPU Collector run...
2017/07/03 10:47:41 *** Created WMI Query for CPU...
2017/07/03 10:47:41 *** Ran WMI Query for CPU...
2017/07/03 10:47:41 *** Sending data for 0 to exporter...
2017/07/03 10:47:41 *** Sending data for 1 to exporter...
time="2017-07-03T10:47:41-07:00" level=debug msg="OK: cpu collector succeeded after 0.038003s." source="exporter.go:90"
2017/07/03 10:48:41 *** Starting CPU Collector run...
2017/07/03 10:50:05 *** Created WMI Query for CPU...
2017/07/03 10:50:08 *** Ran WMI Query for CPU...
2017/07/03 10:50:08 *** Sending data for 0 to exporter...
2017/07/03 10:50:08 *** Sending data for 1 to exporter...
time="2017-07-03T10:50:08-07:00" level=debug msg="OK: cpu collector succeeded after 86.787446s." source="exporter.go:90"
2017/07/03 10:51:08 *** Starting CPU Collector run...
2017/07/03 10:51:08 *** Created WMI Query for CPU...
2017/07/03 10:51:08 *** Ran WMI Query for CPU...
2017/07/03 10:51:08 *** Sending data for 0 to exporter...
2017/07/03 10:51:08 *** Sending data for 1 to exporter...
time="2017-07-03T10:51:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.034084s." source="exporter.go:90"

I tested again using official version v0.2.5 and again observed an unexpected increase in gathering time ~17 minutes after the last spike.

time="2017-07-03T11:01:03-07:00" level=info msg="Starting server on :9182" source="exporter.go:206"
time="2017-07-03T11:01:04-07:00" level=debug msg="OK: cpu collector succeeded after 0.186394s." source="exporter.go:90"
time="2017-07-03T11:02:04-07:00" level=debug msg="OK: cpu collector succeeded after 0.022045s." source="exporter.go:90"
time="2017-07-03T11:03:04-07:00" level=debug msg="OK: cpu collector succeeded after 0.025063s." source="exporter.go:90"
time="2017-07-03T11:04:05-07:00" level=debug msg="OK: cpu collector succeeded after 0.031924s." source="exporter.go:90"
time="2017-07-03T11:05:05-07:00" level=debug msg="OK: cpu collector succeeded after 0.027066s." source="exporter.go:90"
time="2017-07-03T11:06:07-07:00" level=debug msg="OK: cpu collector succeeded after 0.036162s." source="exporter.go:90"
time="2017-07-03T11:07:10-07:00" level=debug msg="OK: cpu collector succeeded after 3.061992s." source="exporter.go:90"
time="2017-07-03T11:08:10-07:00" level=debug msg="OK: cpu collector succeeded after 0.036285s." source="exporter.go:90"
time="2017-07-03T11:09:10-07:00" level=debug msg="OK: cpu collector succeeded after 0.036368s." source="exporter.go:90"
time="2017-07-03T11:10:11-07:00" level=debug msg="OK: cpu collector succeeded after 0.187498s." source="exporter.go:90"
time="2017-07-03T11:11:07-07:00" level=debug msg="OK: cpu collector succeeded after 0.031065s." source="exporter.go:90"
time="2017-07-03T11:12:07-07:00" level=debug msg="OK: cpu collector succeeded after 0.030048s." source="exporter.go:90"
time="2017-07-03T11:13:07-07:00" level=debug msg="OK: cpu collector succeeded after 0.030922s." source="exporter.go:90"
time="2017-07-03T11:14:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.030062s." source="exporter.go:90"
time="2017-07-03T11:15:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.024992s." source="exporter.go:90"
time="2017-07-03T11:16:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.030092s." source="exporter.go:90"
time="2017-07-03T11:17:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.030072s." source="exporter.go:90"
time="2017-07-03T11:18:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.029085s." source="exporter.go:90"
time="2017-07-03T11:19:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.031077s." source="exporter.go:90"
time="2017-07-03T11:20:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.031975s." source="exporter.go:90"
time="2017-07-03T11:21:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.034084s." source="exporter.go:90"
time="2017-07-03T11:22:08-07:00" level=debug msg="OK: cpu collector succeeded after 0.031888s." source="exporter.go:90"
time="2017-07-03T11:23:11-07:00" level=debug msg="OK: cpu collector succeeded after 2.759751s." source="exporter.go:90"
time="2017-07-03T11:24:11-07:00" level=debug msg="OK: cpu collector succeeded after 0.036913s." source="exporter.go:90"

I don't know WMI well enough to know where to start digging, but it would be useful to understand if anyone else observes this.

If there's other data that would be useful to gather, please let me know.

missing IIS metrics

Enabled the IIS collector, but on some of my machines the only metrics that involve IIS are these:

wmi_exporter_scrape_duration_seconds{collector="iis",result="success",quantile="0.5"} 0.0549788
wmi_exporter_scrape_duration_seconds{collector="iis",result="success",quantile="0.9"} 0.0810007
wmi_exporter_scrape_duration_seconds{collector="iis",result="success",quantile="0.99"} 0.0810007
wmi_exporter_scrape_duration_seconds_sum{collector="iis",result="success"} 0.462801
wmi_exporter_scrape_duration_seconds_count{collector="iis",result="success"} 9

Other machines do show the expected information, like this one:

wmi_iis_anonymous_users_total

Any pointers on how to approach this?

Feature Request: IIS HTTP Status

Hi,

I have a feature request to extend the WMI Exporter to expose metrics capturing the following:

  • The total number of 4** HTTP status return codes received from a site for a given HTTP call

  • The total number of 5** HTTP status return codes received from a site for a given HTTP call

I know at the moment you expose the total number of HTTP calls for a given type, e.g. GET. I was thinking it would be good to see something like the following:

wmi_iis_requests_total{method="GET",site="ExampleSite",status="5**"}

Additionally, I think it would be useful to extend the current IIS exporter to include these metrics in order to get throughput metrics:

BytesSentPerSec
BytesReceivedPerSec
BytesTotalPerSec

which are already included as part of the Win32_PerfRawData_W3SVC_WebService class.

Apologies if you have this available already!

Cheers

/ should serve some welcome page instead of redirecting

Currently we do a redirect from / to /metrics. The drawback is that there is no request that can be made to check that everything is alright (that the HTTP server is working, at least) without also incurring the cost of running the WMI queries.

Our use case is that we have / as a http healthcheck endpoint for exporters which we register in Consul. If we want to keep the redirect, we should have some endpoint which indicates that things are ok, at least.
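A minimal sketch of what such a landing page could look like in Go (this is illustrative, not the project's actual code):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a cheap static landing page on / so health checks
	// don't trigger the expensive WMI queries behind /metrics.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`<html>
<head><title>WMI Exporter</title></head>
<body><h1>WMI Exporter</h1><p><a href="/metrics">Metrics</a></p></body>
</html>`))
	})
	// The real /metrics handler would be registered here as usual.
	log.Fatal(http.ListenAndServe(":9182", nil))
}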

[Draft] 0.1 announcement

Planning on posting something like this to https://groups.google.com/forum/#!forum/prometheus-developers and maybe other places?

(PLEASE EDIT BELOW TEXT)

Announcing the release of wmi_exporter 0.1

wmi_exporter is a Prometheus exporter for Windows Management Instrumentation,
an instrumentation API for the Windows kernel available since Windows 2000.

The current version implements exporting for system memory, cpu, disk i/o, networking
and IIS.

We feel the current implementation has reached a state where it is ready for large scale testing
and invite everyone to try it out.

Installation and usage instructions can be found at
https://github.com/martinlindhe/wmi_exporter

Regards,
Martin Lindhe
Calle Pettersson

Feature: SQL Server

Does SQL Server expose WMI metrics?
If so, it would be very nice to also support that in this project.

msi "Installation failed" on Windows server 2003 64-bit

I got these 4 events:

Event Type:	Information
Event Source:	MsiInstaller
Event Category:	None
Event ID:	1040
Date:		12/9/2016
Time:		13:38:57
User:		<host>\<user>
Computer:	<host>
Description:
Beginning a Windows Installer transaction: D:\wmi_exporter-0.1.1-amd64.msi. Client Process Id: 8076.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Event Type:	Information
Event Source:	MsiInstaller
Event Category:	None
Event ID:	1042
Date:		12/9/2016
Time:		13:38:57
User:		NT AUTHORITY\SYSTEM
Computer:	<host>
Description:
Ending a Windows Installer transaction: D:\wmi_exporter-0.1.1-amd64.msi. Client Process Id: 8076.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Event Type:	Information
Event Source:	MsiInstaller
Event Category:	None
Event ID:	11708
Date:		12/9/2016
Time:		13:38:57
User:		<host>\<user>
Computer:	<host>
Description:
Product: WMI Exporter -- Installation failed.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 7b 43 31 46 39 44 42 38   {C1F9DB8
0008: 41 2d 38 30 44 41 2d 34   A-80DA-4
0010: 31 41 33 2d 39 41 35 46   1A3-9A5F
0018: 2d 34 37 38 34 39 30 37   -4784907
0020: 44 41 33 37 38 7d         DA378}  
Event Type:	Information
Event Source:	MsiInstaller
Event Category:	None
Event ID:	1033
Date:		12/9/2016
Time:		13:38:57
User:		<host>\<user>
Computer:	<host>
Description:
Windows Installer installed the product. Product Name: WMI Exporter. Product Version: 0.1.1. Product Language: 1033. Installation success or error status: 1603.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 7b 43 31 46 39 44 42 38   {C1F9DB8
0008: 41 2d 38 30 44 41 2d 34   A-80DA-4
0010: 31 41 33 2d 39 41 35 46   1A3-9A5F
0018: 2d 34 37 38 34 39 30 37   -4784907
0020: 44 41 33 37 38 7d         DA378}  

Make wmi_system_system_up_time format more Prometheus friendly

The metric wmi_system_system_up_time is exported as picoseconds since system start.
To visualize it you have to use something like

wmi_system_system_up_time{instance="$instance"}/1000000000000

It would be more convenient to export the system start as a Unix timestamp. See the Prometheus timestamp documentation.
This should result in a query like:
time() - wmi_system_system_up_time{instance="$instance"}
Prometheus would also be able to better compress the time series.

Automatically convert WMI metrics to prometheus

@brian-brazil wrote in #5

I'm still perusing the docs, but I have a crazy idea.

It looks like it's possible to get to CounterType with WMI (though we'll need to use a different WMI library). Given that and presuming that metric naming is relatively consistent we may be able to fully automate the transformation of metric names and the user would only have to provide a WMI class name. We'll likely still need some hardcoded rules to fix e.g. bytes->megabytes.

I also note we cannot completely rely on documentation alone for CounterTypes - several of the ones in this PR changed in Windows 2000 (which means this won't quite work on XP).

The counter types is what changed between versions.

What I'd see happening for IIS/SQL is that you'd run the WMI exporter but with different flags just pulling in IIS/SQL metrics.

wmi_system_system_up_time changes without reboot

Last night, wmi_system_system_up_time changed on a handful of my machines, even though they were not rebooted. The changes were very small, for example 1484753057.401196 to 1484753057.4005969 (difference 0.0005991).

We should investigate why this happens, since it triggers reboot alerts even for such very small changes.

Also, the name should probably be changed, since it is not actually uptime, but rather the timestamp of the last boot. The node_exporter name is node_boot_time, so wmi_system_boot_time?
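Until the jitter is fixed, one workaround is to alert only on changes larger than the observed noise; a PromQL sketch (the one-second threshold is an assumption):

(wmi_system_system_up_time - wmi_system_system_up_time offset 10m) > 1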

Feature: Active Directory

Already in progress, using the Win32_PerfRawData_DirectoryServices_DirectoryServices counters. This will just cover the "Active Directory - Directory Services" part of the Active Directory, not the web services etc (there are over 150 counters exposed by this class alone...)

Adding custom metrics

Maybe we could re-use the Influx Telegraf model of adding queries:
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters

so users could add missing WMI metrics without changing the code? What do you say?

thanks

Example from the Telegraf GitHub repository (Generic Queries):

[[inputs.win_perf_counters.object]]
# Processor usage, alternative to native, reports on a per core.
ObjectName = "Processor"
Instances = ["*"]
Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
Measurement = "win_cpu"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).

[[inputs.win_perf_counters.object]]
# Disk times and queues
ObjectName = "LogicalDisk"
Instances = ["*"]
Counters = ["% Idle Time", "% Disk Time","% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
Measurement = "win_disk"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).

Wrong flags in Usage helper text

I was trying to manually run wmi_exporter.exe with custom options, because I wanted it to listen on port 9100 (like node_exporter).

However, the flag listed in the command-line usage wasn't working, issuing the error flag provided but not defined: -addr:

PS C:\Program Files\wmi_exporter> .\wmi_exporter.exe -addr ":9100"
flag provided but not defined: -addr
Usage of C:\Program Files\wmi_exporter\wmi_exporter.exe:
  -collector.iis.app-blacklist string
        Regexp of apps to blacklist. App name must both match whitelist and not match blacklist to be included.
  -collector.iis.app-whitelist string
        Regexp of apps to whitelist. App name must both match whitelist and not match blacklist to be included. (default ".+")
  -collector.iis.site-blacklist string
        Regexp of sites to blacklist. Site name must both match whitelist and not match blacklist to be included.
  -collector.iis.site-whitelist string
        Regexp of sites to whitelist. Site name must both match whitelist and not match blacklist to be included. (default ".+")
  -collector.logical_disk.volume-blacklist string
        Regexp of volumes to blacklist. Volume name must both match whitelist and not match blacklist to be included.
  -collector.logical_disk.volume-whitelist string
        Regexp of volumes to whitelist. Volume name must both match whitelist and not match blacklist to be included. (default ".+")
  -collector.net.nic-blacklist string
        Regexp of NIC:s to blacklist. NIC name must both match whitelist and not match blacklist to be included.
  -collector.net.nic-whitelist string
        Regexp of NIC:s to whitelist. NIC name must both match whitelist and not match blacklist to be included. (default ".+")
  -collectors.enabled string
        Comma-separated list of collectors to use. Use '[default]' as a placeholder for all the collectors enabled by default (default "cpu,cs,logical_disk,net,os,service,system")
  -collectors.print
        If true, print available collectors and exit.
  -log.format value
        Set the log target and format. Example: "logger:syslog?appname=bob&local=7" or "logger:stdout?json=true" (default "logger:stderr")
  -log.level value
        Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal] (default "info")
  -telemetry.addr string
        host:port for WMI exporter. (default ":9182")
  -telemetry.path string
        URL path for surfacing collected metrics. (default "/metrics")
  -version
        Print version information.

So I found out that it needed to use two hyphens (--addr ":9100") instead of one (-addr ":9100"), for every option.

So I think a fix is needed in the Usage text.

/metrics URL suddenly stopped working

Hi,
first of all, thank you for this great tool!

I just have a very strange situation: on 1 of my servers, the export suddenly stopped working (everything was fine for more than a month!).

I tried to update to the latest version (0.2.2), but the issue is still appearing.

Issue

When I launch the following command:

wmi_exporter.exe -log.format logger:eventlog?name=wmi_exporter -collectors.enabled "cpu,cs,iis,logical_disk,net,os,system" -telemetry.addr :9182

the /health URI returns {"status":"ok"}, but the /metrics does not respond.

I tried to enable the different collectors one at a time and I got this result:

  • Working fine (/metrics responds with the expected metrics):
    • cs
    • os
  • Failing (make the /metrics hang):
    • cpu
    • iis
    • logical_disk
    • net
    • system

Question

Do you know of a "common point" between the failing collectors, to help me further investigate and debug this situation?

Wiki page for WMI <-> Prometheus conversion

As suggested by @brian-brazil, we should have a wiki page for how to map WMI types to Prometheus types. I made a start, and then tried to PR it, but it seems Github does not support wiki-PRs for some reason.
So, workaround: File an issue with the page markdown :) This is mostly a skeleton so far, I added the few types we use in logical_disk, but I expect we'll continually edit this as we make progress.

Prometheus and WMI CounterTypes

WMI performance counters have many types, and those need to be mapped to Prometheus' types. Below is a listing of WMI types, the Prometheus type that should be used, and any processing that needs to be done.

This applies to WMI classes inheriting Win32_PerfRawData.

WMI type | Prometheus type | Processing
PERF_COUNTER_RAWCOUNT_HEX | |
PERF_COUNTER_LARGE_RAWCOUNT_HEX | |
PERF_COUNTER_TEXT | |
PERF_COUNTER_RAWCOUNT | gauge |
PERF_COUNTER_LARGE_RAWCOUNT | |
PERF_DOUBLE_RAW | |
PERF_COUNTER_DELTA | |
PERF_COUNTER_LARGE_DELTA | |
PERF_SAMPLE_COUNTER | |
PERF_COUNTER_QUEUELEN_TYPE | |
PERF_COUNTER_LARGE_QUEUELEN_TYPE | |
PERF_COUNTER_100NS_QUEUELEN_TYPE | |
PERF_COUNTER_OBJ_TIME_QUEUELEN_TYPE | |
PERF_COUNTER_COUNTER | counter |
PERF_COUNTER_BULK_COUNT | counter |
PERF_RAW_FRACTION | |
PERF_COUNTER_TIMER | |
PERF_PRECISION_SYSTEM_TIMER | |
PERF_100NSEC_TIMER | |
PERF_PRECISION_100NS_TIMER | counter | Normalize to seconds (value / 1e7)
PERF_OBJ_TIME_TIMER | |
PERF_PRECISION_OBJECT_TIMER | |
PERF_SAMPLE_FRACTION | |
PERF_COUNTER_TIMER_INV | |
PERF_100NSEC_TIMER_INV | |
PERF_COUNTER_MULTI_TIMER | |
PERF_100NSEC_MULTI_TIMER | |
PERF_COUNTER_MULTI_TIMER_INV | |
PERF_100NSEC_MULTI_TIMER_INV | |
PERF_AVERAGE_TIMER | |
PERF_ELAPSED_TIME | |
PERF_COUNTER_NODATA | |
PERF_AVERAGE_BULK | |
PERF_SAMPLE_BASE | |
PERF_AVERAGE_BASE | |
PERF_RAW_BASE | |
PERF_PRECISION_TIMESTAMP | |
PERF_LARGE_RAW_BASE | |
PERF_COUNTER_MULTI_BASE | |
PERF_COUNTER_HISTOGRAM_TYPE | |
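To illustrate the one processing rule recorded so far, here is a small Go sketch (the metric name and raw value are hypothetical) normalizing a raw 100ns-timer value to seconds before emitting it as a Prometheus counter:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// hundredNSToSeconds converts a PERF_100NSEC_TIMER-style raw value
// (a count of 100ns ticks) to seconds, i.e. value / 1e7.
func hundredNSToSeconds(raw uint64) float64 {
	return float64(raw) / 1e7
}

func main() {
	// Hypothetical metric built from a raw counter read via WMI.
	desc := prometheus.NewDesc("wmi_example_cpu_time_total",
		"Example 100ns timer exposed in seconds.", nil, nil)
	raw := uint64(123456789) // placeholder raw value
	m := prometheus.MustNewConstMetric(desc, prometheus.CounterValue, hundredNSToSeconds(raw))
	fmt.Println(m.Desc())
}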

Build failure

go get -u github.com/prometheus/common
go build .

# github.com/prometheus/common/expfmt
..\..\prometheus\common\expfmt\decode.go:92: cannot convert v.Name (type *string) to type model.LabelValue
..\..\prometheus\common\expfmt\decode.go:103: cannot convert l.Value (type *string) to type model.LabelValue
..\..\prometheus\common\expfmt\decode.go:106: cannot convert l.Name (type *string) to type model.LabelName
..\..\prometheus\common\expfmt\decode.go:171: invalid case io_prometheus_client.MetricType_COUNTER in switch on f.Type (mismatched types io_prometheus_client.MetricType and *io_prometheus_client.MetricType)
..\..\prometheus\common\expfmt\decode.go:173: invalid case io_prometheus_client.MetricType_GAUGE in switch on f.Type (mismatched types io_prometheus_client.MetricType and *io_prometheus_client.MetricType)
..\..\prometheus\common\expfmt\decode.go:175: invalid case io_prometheus_client.MetricType_SUMMARY in switch on f.Type (mismatched types io_prometheus_client.MetricType and *io_prometheus_client.MetricType)
..\..\prometheus\common\expfmt\decode.go:177: invalid case io_prometheus_client.MetricType_UNTYPED in switch on f.Type (mismatched types io_prometheus_client.MetricType and *io_prometheus_client.MetricType)
..\..\prometheus\common\expfmt\decode.go:179: invalid case io_prometheus_client.MetricType_HISTOGRAM in switch on f.Type (mismatched types io_prometheus_client.MetricType and *io_prometheus_client.MetricType)
..\..\prometheus\common\expfmt\decode.go:195: cannot convert p.Name (type *string) to type model.LabelName
..\..\prometheus\common\expfmt\decode.go:195: cannot convert p.Value (type *string) to type model.LabelValue
..\..\prometheus\common\expfmt\decode.go:195: too many errors

due to recent changes in the prometheus libraries. This is the cause of the AppVeyor build failure.
Investigating...

ping @carlpett

Enhancement: IIS App Pools

It would be nice to expose IIS App Pool metrics that are available from Win32_PerfRawData_APPPOOLCountersProvider_APPPOOLWAS

I think this project's collector would be a pretty useful starting point, but I've never used Go or WMI before until looking into this, so I can't say for sure.

I will try and give this a go if I can get a build environment working for Go, but someone else may be able to add this quicker than I can.

On a related note, my original reason for looking into app pool metrics is that my main desire is to have the app pool as a label for sites exposed as metrics by the IIS portion of this exporter, but that's far beyond me I think. Is that data available at all?

Thanks for your hard work on this project. The metrics it already provides are great!

Add "percent processor time" to CPU collector

Currently, Percent Processor Time is in the Win32_PerfRawData_PerfOS_Processor struct but is not actually collected. Is that something that can be added? Judging by how the collector is laid out (I'm not super familiar with Go, so I might be wrong) it looks to be a pretty simple addition. Was it just omitted by accident in #26?

Proper use of volumeBlacklistPattern?

I'm curious about the proper use of volumeBlacklistPattern, or examples, when needing to exclude volumes that begin with "HarddiskVolume" (e.g. HarddiskVolume1, HarddiskVolume108, etc.) in the logical disk collector.

This is great work Martin.

memory leak?

I use the latest version and it's using more RAM over time. Once it was at about 500 megabytes. I'm not sure if it's a leak. Maybe the memory is reserved but not really used.

Not sure which go_memstats_ metric I could check for that.

no msi

Hi Martin,

your wmi_exporter works fine, but after a govendor build I got only the wmi_exporter.exe. What must I do to get a wmi_exporter.msi? Or, can I configure the behaviour of wmi_exporter.exe via parameters, or a config file?

thx
Frank

Installer fails to download binaries due to broken links

The URL to wix-binaries.zip periodically changes due to new releases of WiX. This currently breaks the installer build. I've contacted Rob Mensching (project lead) and asked about stable URLs. If that cannot happen, we'll need to figure out some way to get the latest URL.
