puppet-splunk's People

Contributors

chutzimir, jorritfolmer, jsushetski, larsfuehrer, nemega, sickbock, timidri, tragiccode, vidkun


puppet-splunk's Issues

Getting the password hash

What approach do you currently take to get the hashed password of users that you put in the passwd configuration file?

Just upgraded to Splunk 7 - issues with certs

I haven't had time to dig into this, but after upgrading to 7, I'm having issues with
ERROR TcpOutputFd - Read error. Connection reset by peer
on all the forwarders and
ERROR TcpInputProc - Error encountered for connection from src=<ip>:39570. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
on all the indexers.

Right before upgrading, I implemented indexer discovery, but besides that, nothing in the module configuration changed between version 6 and 7. Did I miss something?

License server issues

Hi,
first things first: Thanks for this very useful puppet module! :-)
I am having a very hard time figuring out how to get my license server and slaves to trust each other. This is what I see in splunkd.log on my license master (which is also the indexer cluster master):

10-30-2020 19:32:38.391 +0100 ERROR LMMasterRestHandler - path=/masterlm/usage: Signature mismatch between license slave=172.27.42.1 and this License Master. Please make sure that the pass4SymmKey setting in server.conf, under [general], is the same for the License Master and all its slaves from ip=172.27.42.1

My configuration looks similar to example 4 in README.md.

This is the configuration all my splunk machines get:

splunk::service:
    ensure:  'running'
    enable:  'true'
splunk::admin:
    hash:  '$6$tRxxxxxxsecret'
    fn:    'Splunk Admin'
    email: '[email protected]'
splunk::lm: 'cluster-and-license-master-fqdn:8089'

This is what my cluster master gets:

splunk::httpport:         8000
splunk::tcpout:           'indexer_discovery'
splunk::clustering:
    mode:                     'master'
    indexer_discovery:        'true'
    replication_factor:       2
    search_factor:            2
    site_replication_factor:  'origin:1,total:2'
    site_search_factor:       'origin:1,total:2'
    thissite:                 'site1'
    available_sites:          'site1,site2'
    pass4symmkey:             'plaintextsecret'

This is what my cluster indexers get:

splunk::clustering:
    thissite: 'site1'
    forwarder_site_failover: 'site1:site2'
splunk::inputport:        9997
splunk::httpport:         8000
splunk::replication_port: 8080
splunk::clustering:
    mode: 'slave'
    cm: 'cluster-and-license-master-fqdn:8089'
    pass4symmkey: 'plaintextsecret'

I set merge settings for splunk::clustering like this because indexer configuration is kept in two different hiera levels:

    lookup_options:
      splunk::clustering:
        merge: deep

I am running out of ideas on how to debug this further. I can see that, from time to time on Puppet runs, pass4SymmKey is set back to changeme or plaintextsecret. I am afraid I don't understand closely enough when plaintextsecret is replaced with a real hashed key. Does Splunk do that, or does the puppet module do that?

Maybe you could point me in a direction?
Thanks again for this module! Configuring my splunk setup without it would be impossible I guess! :-)

all the best
Jojo

Managing inputs.conf with puppet?

Hi Jorrit,

Thanks for the module - I am trying to use it to deploy universal forwarders. We don't yet have a deployment server available, so we need to manage inputs.conf with Puppet. The only approach I see working with the module as it stands is using Augeas, but that can quickly get quite messy. Is it possible to allow management of inputs.conf in some other way, for instance using a custom content snippet?
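
A minimal sketch of one possible workaround, assuming the module offers no parameter for this today: manage inputs.conf in a small custom app directory with plain file resources. The app name and monitor stanza below are illustrative only, not part of the module.

  # Hedged sketch: custom app directory holding an inputs.conf, managed outside the module.
  file { ['/opt/splunkforwarder/etc/apps/puppet_custom_inputs',
          '/opt/splunkforwarder/etc/apps/puppet_custom_inputs/local']:
    ensure => directory,
  }

  # Illustrative inputs.conf content; adjust the stanza to your environment.
  file { '/opt/splunkforwarder/etc/apps/puppet_custom_inputs/local/inputs.conf':
    ensure  => file,
    content => "[monitor:///var/log/messages]\nindex = os\nsourcetype = syslog\n",
  }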

typo and missing parameter

in the init.pp you have a typo on line 37
$ds_intermediate = $splunk::params::ds_intemediate,
should be
$ds_intermediate = $splunk::params::ds_intermediate,

The following parameter is missing in params.pp, causing errors:
splunk::params::ecdhcurvename_intermediate

A variable referenced in certs/s2s.pp is not declared (line 7):
$use_certs = $splunk::use_certs,

add search-server command fails

line 20 of addsearchpeers.pp is missing (http|https):// for the -host option

current:
command => "splunk add search-server -host ${title}....

It errors in the following way:
Notice: /Stage[main]/Splunk/Splunk::Addsearchpeers[idx1.localdomain:9997]/Exec[splunk add search-server idx1.localdomain:9997]/returns: Could not look up HOME variable. Auth tokens cannot be cached.
Notice: /Stage[main]/Splunk/Splunk::Addsearchpeers[idx1.localdomain:9997]/Exec[splunk add search-server idx1.localdomain:9997]/returns:
Notice: /Stage[main]/Splunk/Splunk::Addsearchpeers[idx1.localdomain:9997]/Exec[splunk add search-server idx1.localdomain:9997]/returns: An error occurred:
Notice: /Stage[main]/Splunk/Splunk::Addsearchpeers[idx1.localdomain:9997]/Exec[splunk add search-server idx1.localdomain:9997]/returns: Error while sending public key to search peer: Connection closed by peer
Error: 'splunk add search-server -host idx1.localdomain:9997 -auth admin:changemeagain -remoteUsername admin -remotePassword changemeagain && touch /opt/splunk/etc/auth/distServerKeys/idx1.localdomain:9997.done' returned 24 instead of one of [0]

Per the Splunk docs, the command should contain the http:// or https:// prefix:
http://docs.splunk.com/Documentation/Splunk/7.2.0/DistSearch/Configuredistributedsearch#Use_the_CLI
splunk add search-server https://192.168.1.1:8089 -auth admin:password -remoteUsername admin -remotePassword passremote
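
A hedged sketch of what the fixed exec might look like, prepending the scheme to the peer address. The credential variables here are illustrative placeholders, not the module's actual parameter names, and the remaining exec attributes from the module are omitted.

  # Sketch only: $admin_user / $admin_pass / $remote_user / $remote_pass are placeholders.
  exec { "splunk add search-server ${title}":
    command => "splunk add search-server https://${title} -auth ${admin_user}:${admin_pass} -remoteUsername ${remote_user} -remotePassword ${remote_pass}",
    path    => ['/opt/splunk/bin', '/bin', '/usr/bin'],
  }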

Admin Pass Param

Hi,

I'm having some difficulty declaring the admin param. I'm using Puppet Enterprise 2018 and Hiera. I initially tried defining it in Hiera with the following:

---
classes:
  - apt-test
  - splunk

splunk::httpport: 8000
splunk::kvstoreport: 8191
splunk::inputport: 9997
splunk::package_source: "splunk"
splunk::admin: {"pass": "changemeagain"}

Then I tried adding it in the Puppet console under the node classification.

Class: Splunk
Parameter: Admin
Value: {"pass": "changemeagain"}

Neither seems to work; I get an error saying no users exist when I try to log in for the first time. I can't see any errors or warnings in the Puppet output on the agent.
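
For reference, a minimal sketch in class-declaration form, assuming the admin hash expects a pre-hashed password under a hash key (as in the license-master example earlier on this page) rather than a plaintext pass key. The hash value and email below are illustrative only.

  class { 'splunk':
    httpport    => 8000,
    kvstoreport => 8191,
    inputport   => 9997,
    admin       => {
      hash  => '$6$somesalt$somehashedvalue',  # sha512-crypt hash, illustrative value
      fn    => 'Splunk Admin',
      email => 'admin@example.com',
    },
  }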

Any assistance on this would be greatly appreciated.

Cheers

Future: make use of "indexer discovery"

Hi,
I love your module; it's great, easy, and feature rich.

I'd like to recommend switching from "direct peer node" to "indexer discovery"; it would greatly reduce maintenance of both the code and the UF nodes, see the link below.

http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/indexerdiscovery#Configure_indexer_discovery_with_SSL

Also, it would be useful if your instructions contained a heavy forwarder example.
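
A rough sketch of what a heavy forwarder declaration might look like, under the assumption that a heavy forwarder is simply a full Splunk install with tcpout set. Parameter names are taken from the UF examples elsewhere on this page; whether anything further is required is an assumption.

  class { 'splunk':
    ds           => 'deploymentserver.example.com:8089',  # illustrative host
    tcpout       => 'indexer_discovery',
    pass4symmkey => $pass4symmkey,
    admin        => {
      hash  => $admin_hash,
      fn    => 'Forwarder Administrator',
      email => $email,
    },
  }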

Thanks for the great module and thanks for sharing!

Is there reasoning behind NOT restarting the splunk daemon when the configuration changes?

I've got about 2800 servers on which I'm going to be installing Splunk. The initial roll-out will have those servers pointing to a beta-test environment and then to our real production environment.

If I change the splunk::ds parameter, the splunk daemon won't automatically restart. The README.md makes note of this behavior.

What is the reasoning behind not restarting the splunk daemon?
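
If automatic restarts are wanted despite the module's default, one workaround sketch outside the module is an exec that restarts the forwarder whenever a config file you manage yourself changes. The watched path is illustrative, and the File resource it subscribes to must be declared elsewhere in your catalog.

  # Sketch only: restart the UF when a separately managed config file changes.
  exec { 'restart splunk forwarder on config change':
    command     => '/opt/splunkforwarder/bin/splunk restart',
    refreshonly => true,
    subscribe   => File['/opt/splunkforwarder/etc/system/local/deploymentclient.conf'],
  }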

reuse_puppet_certs does not use puppet certs

Using the latest 3.11.0 release and leaving reuse_puppet_certs at default is resulting in Splunk using the default Splunk-provided cert instead of using the certs from Puppet CA. At least Splunk web is using that default Splunk cert. Is this expected behavior or am I overlooking something?

Steps to reproduce:

  • provision new RHEL host and hook to Puppet
  • use Puppet to install Splunk 7.0.3 with reuse_puppet_certs left at default value

Expected Behavior: Splunk web presents a certificate from our internal Puppet CA.

Actual Behavior: It presents the default Splunk certificate from SplunkCommonCA.

Splunk_home parameter not found causes puppet run to fail

Setting splunk_home parameter results in the following error during puppet run:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Class[Splunk]: has no parameter named 'splunk_home' at /etc/puppetlabs/code/environments...

Removing the splunk_home parameter results in a successful puppet run again.

upgrading splunk using the version field on the top of an earlier managed version

Issue:
When using the version parameter as a means to upgrade to a newer version of Splunk, the install completes but Splunk will not automatically start, and Puppet reports will contain errors.

Puppet installs the new version of Splunk, but it will not start, resulting in errors in the host report.

Workaround (tested on CentOS on a simple all-in-one Splunk install):
Run "systemctl stop puppet" to temporarily stop puppet from running

Run "./$SPLUNKHOME/splunk/bin/splunk version"
This will prompt you to accept the general terms and conditions of the new version, then prompt you to perform a upgrade/migration. Allow this to complete.

Run "./$SPLUNKHOME/splunk/bin/splunk version" again you should be presented with the updated version as expected.

Run "systemctl start puppet" to restart the Puppet agent.
Run "puppet agent -t" (assuming it hasn't already started)

Puppet will perform some remaining actions and start Splunk

Splunk should now be accessible and you should not see any more errors in the host report.

Any previous data (custom indexes and data) should still exist and be accessible.

@jorritfolmer - Great module btw.

Indexer Discovery - Cluster Master doesn't have cm specified which causes issues with forwarding from CM to indexers

I recently enabled indexer discovery and came across this problem.

My CM doesn't have cm defined under clustering so this is what the outputs.conf on the CM looks like:

~/etc/apps $ cat puppet_common_ssl_outputs/local/outputs.conf
[tcpout]
defaultGroup = cluster

[tcpout:cluster]
indexerDiscovery = cluster
sslCertPath = /opt/splunk/etc/auth/certs/s2s.pem
sslRootCAPath = /opt/splunk/etc/auth/certs/ca.crt
useACK = false

[indexer_discovery:cluster]
pass4SymmKey = <redacted>
master_uri = https://

Notice the empty master_uri.

My CM wasn't forwarding events (or storing them locally, because that was disabled) until I added this to system/local/outputs.conf:

[indexer_discovery:cluster]
master_uri = https://clustermasterurl:8089

Then logs started flowing. I don't think this is clarified in the code, but it's definitely not accounted for in the indexer discovery example. Should I have assumed I need to set cm under clustering for the Cluster Master, since it is also a forwarder after all?
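
A sketch of the likely fix on the cluster master, assuming the module fills master_uri from the cm key of the clustering hash. The keys are copied from the indexer example above; whether setting cm on a master-mode node has other side effects is an assumption.

  class { 'splunk':
    tcpout     => 'indexer_discovery',
    clustering => {
      mode              => 'master',
      indexer_discovery => true,
      cm                => 'cluster-master-fqdn:8089',  # this node's own management URI
      pass4symmkey      => 'plaintextsecret',
    },
  }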

Cannot add searchpeers parameter in Puppet Enterprise

I am attempting to set up a search head, but I am getting the following error when I update the splunk module within the Puppet Enterprise classification. I am using two peers, but I have also tested it with just one, with the same error. Can you assist?

2018-04-04 15:34 Z err Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Operator '[]' is not applicable to an Undef Value. at /etc/puppetlabs/code/environments/production/modules/splunk/manifests/addsearchpeers.pp:11:18 at /etc/puppetlabs/code/environments/production/modules/splunk/manifests/init.pp:189 on node Source:

SHC Member and Puppet Enabled

We are looking to move to Search Head Clustering and are managing a number of other services on the SHC members with Puppet, and we are required to do so by internal security policy. Currently, the "disable puppet forever" approach this module takes for SHC is a pain, and we'd like to continue using this module since we're using it for indexers and licensing and have been using it for stand-alone search heads.

So is the intent for the module to stay as is, or did the work get abandoned part way through? If it's the former, then we'll need to look at alternatives. If it is the latter, then we'll look at whether we can put together a PR. We just need to figure out where to put our effort.

Thanks!

Puppet cert locations on RHEL

I have a fairly minor issue. I'm going to clone to make some quick updates, but in short: The Red Hat provided puppet packages place the puppet certs under /var/lib/puppet/ssl instead of /etc/puppet/ssl.

My plan is to clone and update the cert location in master/manifests/certs/s2s.pp based on the $osfamily fact. I'd also be open, if desired, to adding a cert location parameter in params.pp to make it changeable if the certs get moved again.
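
A minimal sketch of the osfamily-based selection described above; the RedHat path comes from this issue, and the default path mirrors the location the module currently uses.

  # Sketch only: pick the Puppet SSL directory based on the OS family fact.
  $puppet_ssl_dir = $facts['os']['family'] ? {
    'RedHat' => '/var/lib/puppet/ssl',
    default  => '/etc/puppet/ssl',
  }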

-Chuck

"Don't know how to install" Error when attempting to install Splunk UF on Windows Server

I am trying to install the Splunk UF on a Windows server using the Splunk::Installed class and am running into the following error:

Error: Could not find command '/opt/splunkforwarder/bin/splunk'
Error: /Stage[main]/Splunk::Installed/Exec[splunk initial run]/returns: change from 'notrun' to ['0'] failed: Could not find command '/opt/splunkforwarder/bin/splunk'

Can you assist? I am using Puppet Enterprise.

Search Head Clustering

Very nice job with this module, I am currently using it to bootstrap our new Splunk environment. Having some issues with our search head cluster, which is your example 7.

I have successfully applied the SHC node configurations to a staging server and captured the puppet_* apps.
I have configured a deployer.
However, I am not clear on the best way to bootstrap new SHC nodes, since there is a chicken/egg problem with configuring them to join the cluster before pushing configurations.

Could you clarify the steps to take on the SHC nodes/slaves?

Thanks!

Convert module to utilize PDK

In order to get a PDK badge and also keep up to date with the latest development standards for Puppet modules, it would be best to convert this module into a PDK-compatible one. If you have no problems with this, I can start on the work for it.

Option to disable management of Splunk service

We want to manage the Splunk Forwarder service ourselves using a systemd service/unit file (Splunk does not provide one with the rpm as of 7.0.1), but there is currently no way to do this because the module doesn't allow use of a systemd service as long as the status, start and stop parameters override it. The service resource in the module can't be disabled either.

status => "${splunk_home}/bin/splunk status",
start  => "${splunk_home}/bin/splunk start",
stop   => "${splunk_home}/bin/splunk stop",

A simple $manage_service variable would fix this.
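
A sketch of what such a guard could look like inside the module; the parameter name $manage_service comes from the suggestion above, and the resource attributes are copied from the snippet.

  # Sketch only: skip the service resource entirely when $manage_service is false.
  if $manage_service {
    service { 'splunk':
      ensure => running,
      enable => true,
      status => "${splunk_home}/bin/splunk status",
      start  => "${splunk_home}/bin/splunk start",
      stop   => "${splunk_home}/bin/splunk stop",
    }
  }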

splunk-launch.conf splunk_os_user: Could not evaluate: Saving failed

Hi,
I'm getting the below errors with this Puppet code:

class { 'splunk':
  httpport           => 8000,
  kvstoreport        => 8191,
  inputport          => 9997,
  reuse_puppet_certs => false,
  sslcertpath        => 'server.pem',
  sslrootcapath      => 'cacert.pem',
}

Any idea why? The Splunk version is 6.6.1.

Below are the errors I'm seeing.

Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_os_user: Opening augeas with root /, lens path , flags 64
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_os_user: Augeas version 1.4.0 is installed
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_os_user: Will attempt to save and only run if files changed
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_os_user: sending command 'set' with params ["/files/opt/splunk/etc/splunk-launch.conf/SPLUNK_OS_USER", "splunk"]
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_os_user: Closed the augeas connection
Error: /Stage[main]/Splunk::Splunk_launch/Augeas[/opt/splunk/etc/splunk-launch.conf splunk_os_user]: Could not evaluate: Saving failed, see debug
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_bindip: Opening augeas with root /, lens path , flags 64
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_bindip: Augeas version 1.4.0 is installed
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_bindip: Will attempt to save and only run if files changed
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_bindip: sending command 'rm' with params ["/files/opt/splunk/etc/splunk-launch.conf/SPLUNK_BINDIP"]
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_bindip: Skipping because no files were changed
Debug: Augeas/opt/splunk/etc/splunk-launch.conf splunk_bindip: Closed the augeas connection
Debug: Class[Splunk::Splunk_launch]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Deploymentclient]: Resource is being skipped, unscheduling all events
Notice: /Stage[main]/Splunk::Deploymentclient/File[/opt/splunk/etc/apps/puppet_common_deploymentclient_base]: Dependency Augeas[/opt/splunk/etc/splunk-launch.conf splunk_os_user] has failures: true
Warning: /Stage[main]/Splunk::Deploymentclient/File[/opt/splunk/etc/apps/puppet_common_deploymentclient_base]: Skipping because of failed dependencies

Debug: /Stage[main]/Splunk::Deploymentclient/File[/opt/splunk/etc/apps/puppet_common_deploymentclient_base]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Deploymentclient]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Distsearch]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Distsearch]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Passwd]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Passwd]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Authentication]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Authentication]: Resource is being skipped, unscheduling all events
Debug: Class[Splunk::Service]: Resource is being skipped, unscheduling all events
Notice: /Stage[main]/Splunk::Service/Service[splunk]: Dependency Augeas[/opt/splunk/etc/splunk-launch.conf splunk_os_user] has failures: true
Warning: /Stage[main]/Splunk::Service/Service[splunk]: Skipping because of failed dependencies

Debug: /Stage[main]/Splunk::Service/Service[splunk]: Resource is being skipped, unscheduling all events

server.conf template spacing causes changes after Splunk restart

The spacing around the = for enableSplunkdSSL in puppet-splunk/templates/puppet_common_ssl_base/local/server.conf is causing a change/update to the file on every Puppet run following a restart of Splunk.

Restarting Splunk causes Splunk to clean up server.conf, adding spacing around the = for each setting. The next Puppet run detects this as a change and reverts it back to the version without the spacing.

If you've implemented an exec to restart Splunk when this conf file changes, it will force a Splunk restart on every Puppet run. Thus, causing an endless loop of Splunk restarts on every Puppet run.

Universal forwarder is not using the puppet-signed certificate

Hi
I've noticed the below warning on a UF:

07-06-2017 12:53:03.818 +0000 WARN X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates

Is this a bug, or is this not implemented by default?
This is my code for UF:

class { 'splunk':
  type         => 'uf',
  ds           => $master,
  pass4symmkey => $pass4symmkey,
  tcpout       => [ $index1, $index2, $index3 ],
  admin        => {
    hash  => $admin_hash,
    fn    => 'Forwarder Administrator',
    email => $email,
  },
  service      => {
    ensure => running,
    enable => true,
  },
}

indexer_discovery:cluster misconfiguration for UF and HF

It seems that while using the indexer discovery functionality, in the file:
/opt/splunkforwarder/etc/apps/puppet_common_ssl_outputs/local/outputs.conf
it's adding the VM's own hostname instead of the cluster master's.
It does this for both UF and HF.

Example:
[indexer_discovery:cluster]
pass4SymmKey = somerandomkey
master_uri = https://uf.local:8089 # hostname should have been master.local instead.

class { 'splunk':
  type         => 'uf',
  ds           => $splunk_cm01,
  pass4symmkey => $splunk_pass4symmkey,
  tcpout       => 'indexer_discovery',
  admin        => {
    hash  => $splunk_admin_hash,
    fn    => 'Forwarder Administrator',
    email => $splunk_email,
  },
  service      => {
    ensure => running,
    enable => true,
  },
}

Add support for package_source also for Linux

Currently the package_source parameter works only for Windows.

When I tried to install Splunk on RHEL (3.10.0-514.26.2.el7.x86_64), it produced this error:

Error: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install splunk' returned 1: Error: Nothing to do
Error: /Stage[main]/Splunk::Installed/Package[splunk]/ensure: change from purged to present failed: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install splunk' returned 1: Error: Nothing to do
Notice: /Stage[main]/Splunk::Installed/Exec[splunk enable boot-start etcetera]: Dependency Package[splunk] has failures: true

I got the same error when I provided the Splunk version:

  class { 'splunk':
    version     => '6.6.2-4b804538c686',
    httpport    => 8000,
    kvstoreport => 8191,
    inputport   => 9997,
  }

Error: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install splunk' returned 1: Error: Nothing to do
Error: /Stage[main]/Splunk::Installed/Package[splunk]/ensure: change from purged to 6.6.2 failed: Could not update: Execution of '/bin/yum -d 0 -e 0 -y install splunk-6.6.2' returned 1: Error: Nothing to do
Notice: /Stage[main]/Splunk::Installed/Exec[splunk enable boot-start etcetera]: Dependency Package[splunk] has failures: true

I've only managed to install splunk from the command line by providing a full URL:

yum install https://download.splunk.com/products/splunk/releases/6.6.2/linux/splunk-6.6.2-4b804538c686-linux-2.6-x86_64.rpm

Is it possible to add support for package_source also for Linux, or add a condition not to fail the puppet run in case I have installed splunk manually?

  package { 'splunk-yum':
    ensure   => present,
    provider => 'yum',
    name     => 'https://download.splunk.com/products/splunk/releases/6.6.2/linux/splunk-6.6.2-4b804538c686-linux-2.6-x86_64.rpm',
  }

  class { 'splunk':
    skip_installation => true,
    httpport          => 8000,
    kvstoreport       => 8191,
    inputport         => 9997,
  }

Splunk service restart loop after updating from puppet 3 to 4

This started happening on all our forwarders after updating recently. I can see what's happening but am having a hard time figuring out how to stop it. The value is getting changed to the default, the UF gets restarted and the value gets encrypted, Puppet sees that it doesn't match and resets it back to the default, the service is restarted and the value gets encrypted again... rinse and repeat. I must be missing something obvious.
Notice: /Stage[main]/Splunk::Server::General/File[/opt/splunkforwarder/etc/apps/puppet_common_pass4symmkey_base/local/server.conf]/content: content changed '{md5}e03e7073e0cf85c462adac7e2d997ba2' to '{md5}ed056538b79d38db26e26824097b5888'
Notice: /Stage[main]/Splunk::Service/Service[splunk]: Triggered 'refresh' from 1 events

Add Beaker acceptance tests that use docker in travis-ci

I would like to add some basic Beaker acceptance tests for this module that will spin up Splunk in Docker containers and make assertions to catch bugs/issues/regressions with this module. Let me know how this sounds and I can start some work on this and get your input.

Splunk user directory permissions not working

OK, so the Splunk UF is installed, but the directory permissions need to be at least 755 for the splunk user to be able to start the service. Can you update the module to set the /opt/splunkforwarder directory to 755 for the splunk user?

I tested this on Linux and it works, but it would be great if the module set it automatically when it installs.
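
A sketch of how that could be expressed, assuming a splunk user and group and the 0755 mode from the description. Only the top-level directory is shown, since recursing the whole tree may be undesirable.

  # Sketch only: ensure the UF install directory is traversable by the splunk user.
  file { '/opt/splunkforwarder':
    ensure => directory,
    owner  => 'splunk',
    group  => 'splunk',
    mode   => '0755',
  }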

Pass4symmkey created on UF install

The pass4symmkey seems like it should just be used for SH clustering. However, it gets added to UF-only installs. This causes unnecessary changes to occur if Splunk restarts and then Puppet runs; it changes the encrypted password back to 'changeme'. I tried digging around in the code to fix it, but it keeps getting added. If I comment it out in params.pp it fixes the issue, but the file still gets created, just without the password.
