
caddy-dynamicdns's People

Contributors

caiych, cobyge, dependabot[bot], francislavoie, jm355, lenke182, leonghui, mholt, otaconix, pkoenig10

caddy-dynamicdns's Issues

Add HA support.

Example Scenario
Two Caddy instances run on different hosts with IPs 11.11.11.11 and 11.11.11.12. DNS records need to be inserted for both IPs (for DNS load balancing).

Notes
I understand I might be opening a can of worms (multi-instance coordination, distributed systems, etc.), but I thought I'd give it a shot.

Doesn't Update Cloudflare DNS

When Caddy first starts, it updates the DNS ONLY if all the zones have IPs different from the current IP. If, say, I change the IP of one zone in the Cloudflare dashboard, it does not update it.
I also keep my check interval at 1m; I can see in the logs that it does check my current IP, but it never updates the records UNLESS I restart Caddy. Then it updates them ONCE and never again, so I have to keep restarting Caddy when my IP changes. This is my Caddyfile:

{
	dynamic_dns {
		provider cloudflare {API KEY}
		domains {
			mydomain.com @ test test1 test2
		}
		check_interval 1m
		ip_source simple_http https://icanhazip.com
	}
}

test.mydomain.com { 

My Logs:

{"level":"warn","ts":1632912662.507794,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://icanhazip.com","error":"Get \"https://icanhazip.com\": dial tcp6 [2606:4700::6812:69c]:443: connect: no route to host"}
{"level":"warn","ts":1632912722.078304,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://icanhazip.com","error":"Get \"https://icanhazip.com\": dial tcp6 [2606:4700::6812:69c]:443: connect: no route to host"}
{"level":"warn","ts":1632912782.1980016,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://icanhazip.com","error":"Get \"https://icanhazip.com\": dial tcp6 [2606:4700::6812:79c]:443: connect: no route to host"}
{"level":"warn","ts":1632912842.0858293,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://icanhazip.com","error":"Get \"https://icanhazip.com\": dial tcp6 [2606:4700::6812:69c]:443: connect: no route to host"}
{"level":"warn","ts":1632912902.5992603,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://icanhazip.com","error":"Get \"https://icanhazip.com\": dial tcp6 [2606:4700::6812:69c]:443: connect: no route to host"}

Possible issue with "ip_source interface" with dual stack interfaces

Hello,

I have posted this in the caddy.community before opening this issue:
https://caddy.community/t/dynamic-dns-module-question-about-domain-configuration/22291/4

The gist of it is: when ip_source interface is set as the IP source, it will read every IP address assigned to that interface.

IPv6 interfaces have a mandatory link local address (LLA), they may also have a unique local address (ULA), and also a global unicast address (GUA).

If that interface also has an additional IPv4 address in the RFC 1918 space (192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8) or in loopback space (127.0.0.0/8), it will likewise be read and processed.

As an example, take this interface:

hn1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: WAN (wan)
        options=80018<VLAN_MTU,VLAN_HWTAGGING,LINKSTATE>
        ether 00:15:5d:00:c9:8c
        inet 172.16.0.199 netmask 0xffffff00 broadcast 172.16.0.255
        inet6 fe80::215:5dff:fe00:c98c%hn1 prefixlen 64 scopeid 0x6
        inet6 2003:a:1704:63aa:215:5dff:fe00:c98c prefixlen 64 autoconf
        media: Ethernet autoselect (10Gbase-T <full-duplex>)
        status: active
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>

This is the Caddyfile configuration (just a test setup to trigger the logs):

{
        storage file_system {
                root /usr/local/etc/caddy
        }
        log {
                output net unixgram//var/caddy/var/run/log {
                }
                format json {
                        time_format rfc3339
                }
        }

        servers {
                trusted_proxies static 192.168.1.1/32
                log_credentials
        }

        dynamic_dns {
                provider cloudflare sfdfsdfsdfs
                domains {
                        example.net @
                }
                ip_source interface hn1
                check_interval 5m
                ttl 1h
        }

        import /usr/local/etc/caddy/caddy.d/*.global
}

example.net {
        abort
}

These are the emitted logs:

2024-01-11T21:05:46	Informational	caddy	{"level":"info","ts":"2024-01-11T21:05:46Z","logger":"dynamic_dns","msg":"updating DNS record","zone":"example.net","type":"AAAA","name":"@","value":"2003:a:1704:63aa:215:5dff:fe00:c98c","ttl":3600}
2024-01-11T21:05:46	Informational	caddy	{"level":"info","ts":"2024-01-11T21:05:46Z","logger":"dynamic_dns","msg":"updating DNS record","zone":"example.net","type":"AAAA","name":"@","value":"fe80::215:5dff:fe00:c98c","ttl":3600}
2024-01-11T21:05:46	Informational	caddy	{"level":"info","ts":"2024-01-11T21:05:46Z","logger":"dynamic_dns","msg":"updating DNS record","zone":"example.net","type":"A","name":"@","value":"172.16.0.199","ttl":3600}

Expected behavior:
IPv4 RFC1918 addresses and IPv6 non-GUA addresses should be ignored by default.

Logged behavior:
An AAAA record request for the IPv6 link-local address (LLA) fe80::215:5dff:fe00:c98c and an A record request for the RFC 1918 IPv4 address 172.16.0.199 were issued in addition to the AAAA record request for the GUA.

Suggestion:
Maybe there could be an additional option to filter these addresses out, such as include_private_ips on|off, with off as the default since that matches the expected behavior.
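
For illustration, here is a minimal sketch in Go of what such a default filter could look like, using the standard net/netip package (the function name and structure are hypothetical, not the module's actual code):

package main

import (
	"fmt"
	"net/netip"
)

// isPublicUnicast reports whether an address should be published in DNS.
// It rejects loopback, link-local (fe80::/10), private space
// (RFC 1918 and fc00::/7 ULA), and unspecified addresses.
func isPublicUnicast(addr netip.Addr) bool {
	return addr.IsValid() &&
		!addr.IsLoopback() &&
		!addr.IsLinkLocalUnicast() &&
		!addr.IsLinkLocalMulticast() &&
		!addr.IsPrivate() &&
		!addr.IsUnspecified()
}

func main() {
	candidates := []string{
		"172.16.0.199",                        // RFC 1918, would be skipped
		"fe80::215:5dff:fe00:c98c",            // link-local, would be skipped
		"2003:a:1704:63aa:215:5dff:fe00:c98c", // GUA, would be kept
	}
	for _, s := range candidates {
		addr, err := netip.ParseAddr(s)
		if err != nil {
			continue
		}
		fmt.Println(addr, "public:", isPublicUnicast(addr))
	}
}

With such a filter as the default, only the GUA from the example interface above would be turned into a record.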

Thank you a lot :)

Context canceled while creating new records when triggered by /load

I'm not exactly sure what's going on, but I'm running into an issue. First, a few facts:

  • caddy version: v2.7.4 h1:J8nisjdOxnYHXlorUKXY75Gr6iBfudfoGhrJ8t7/flI=
  • caddy list-modules --versions:
    [snip]
      Standard modules: 106
    
    dns.providers.hetzner v0.0.1
    dynamic_dns v0.0.0-20230908132045-920daa5a969f
    dynamic_dns.ip_sources.interface v0.0.0-20230908132045-920daa5a969f
    dynamic_dns.ip_sources.simple_http v0.0.0-20230908132045-920daa5a969f
    dynamic_dns.ip_sources.upnp v0.0.0-20230908132045-920daa5a969f
    
      Non-standard modules: 5
    
      Unknown modules: 0
    

I'm using caddy-docker-proxy, and every time I add a new host to dynamicdns's config, caddy-docker-proxy calls the Caddy API to load the new Caddyfile, which in turn triggers dynamicdns, and I end up with this in my logs:

{"level":"info","ts":1695234347.7528663,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"redacted"}
{"level":"warn","ts":1695234347.7623093,"logger":"docker-proxy","msg":"Caddyfile to json warning","warn":"[Caddyfile:5: Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies]"}
{"level":"info","ts":1695234347.762509,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1695234347.7636597,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"45390","headers":{"Accept-Encoding":["gzip"],"Content-Length":["24508"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1695234347.7665772,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1695234347.767036,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1695234347.767064,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1695234347.7729192,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1695234347.7729392,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1695234347.7729702,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1695234347.7729752,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["redacted"]}
{"level":"info","ts":1695234347.7729893,"logger":"http","msg":"servers shutting down with eternal grace period"}
{"level":"info","ts":1695234347.7730908,"logger":"dynamic_dns","msg":"updating DNS record","zone":"redacted","type":"A","name":"redacted","value":"redacted","ttl":300}
{"level":"error","ts":1695234347.773107,"logger":"dynamic_dns","msg":"failed setting DNS record(s) with new IP address(es)","zone":"redacted","error":"Get \"https://dns.hetzner.com/api/v1/zones?name=redacted\": context canceled"}
{"level":"info","ts":1695234347.7731194,"logger":"dynamic_dns","msg":"finished updating DNS","current_ips":["redacted"]}
{"level":"info","ts":1695234347.7742302,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1695234347.7742527,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1695234347.7743573,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1695234347.783447,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}

Could it be that dynamicdns runs within the context of the server that is being shut down, or something like that?
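
For illustration, here is a minimal standalone sketch (not the plugin's actual code; the URL is a placeholder) showing how an HTTP request made with an already-cancelled context fails with exactly this "context canceled" error:

package main

import (
	"context"
	"fmt"
	"net/http"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate the outgoing config's context being cancelled during a reload

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"https://dns.example.invalid/api/v1/zones", nil)
	if err != nil {
		panic(err)
	}

	_, err = http.DefaultClient.Do(req)
	fmt.Println(err) // Get "https://dns.example.invalid/api/v1/zones": context canceled
}

If the DNS update is started under the old config's context and the /load reload cancels that context while the update is in flight, this would produce the error in the log above.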

Use own IP source

You mention that IP sources and DNS providers are modular. I'm trying to write my own IP Source (caddy-dynamicdns-cmd-source) but struggling to get it working:

When testing I get:

Error: adapting config using caddyfile: parsing caddyfile tokens for 'dynamic_dns': Caddyfile:7 - Error during parsing: getting module named 'dynamic_dns.ip_sources.command': module not registered: dynamic_dns.ip_sources.command

I guess I have to change something in caddy.ModuleInfo or caddy.RegisterModule, but I don't know what.
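
For reference, here is a minimal sketch of the registration plumbing such a module needs, based on the standard Caddy module API (the struct name and fields are illustrative, and the IP-source methods themselves are omitted; they must match the interface defined by caddy-dynamicdns):

package commandipsource

import (
	"github.com/caddyserver/caddy/v2"
)

// CommandSource is a hypothetical IP source that shells out to a command.
type CommandSource struct {
	Command string   `json:"command,omitempty"`
	Args    []string `json:"args,omitempty"`
}

// init registers the module when the package is imported (e.g. via
// xcaddy --with). If this never runs, Caddy reports "module not registered".
func init() {
	caddy.RegisterModule(CommandSource{})
}

// CaddyModule returns module information. The ID must be the full
// namespaced name used in the config, here dynamic_dns.ip_sources.command.
func (CommandSource) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "dynamic_dns.ip_sources.command",
		New: func() caddy.Module { return new(CommandSource) },
	}
}

Besides the registration, the package also has to be compiled into the binary (for example with xcaddy --with pointing at the module's repo), and to be configurable from the Caddyfile it likely needs to implement caddyfile.Unmarshaler as well.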

#39 (comment)

Did I misunderstand your comments on this? Or did I just configure it wrong?

PS.: I opened a new Issue for better discoverability.

Caddy stuck on building

Windows am64 + caddy-dns/cloudflare + mholt/caddy-dynamicdns

It just sits on building; I've waited half an hour now.

domain not found in DNS

On startup it says:
domain not found in DNS {"domain": "@"}
and then it says there's a different IP address even if there isn't.
If I set the DNS record to something other than the server's IP, it is correctly set, but the update seems to be triggering when it doesn't need to.

my config is below:

{	
	dynamic_dns {
		provider cloudflare {image below}

		domains {
			url.tk @
		}
		ip_source simple_http https://icanhazip.com
		ip_source simple_http https://api64.ipify.org
		check_interval 5m
		versions ipv4
		ttl 1h
	}
}

IMAGE (Cloudflare DNS record): A | url.tk | public_ip | Proxied | Auto

feature to give fixed ip source or set ip source via env

I am using GCP and dynamically creating instances. The IP is already known to me, and I can set it in a config file from the startup script.

Could there be an option so the IP source doesn't do a lookup but instead reads from a static file?

[Feature Request] Get IP from Fritz!Box

Hi,

I have a similar issue to #31.

My setup looks like this:
Fritz!Box -> VPN-GW -> Clients

To obtain the "ISP-IP" and not the "VPN-IP", I use ddclient with the following script:
https://github.com/ddclient/ddclient/blob/master/sample-get-ip-from-fritzbox

Now I would like to do the same with caddy.

I generated some working Go code with ChatGPT, but I'm not sure how to integrate it.

package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

type SoapEnvelope struct {
	XMLName xml.Name `xml:"Envelope"`
	Body    SoapBody
}

type SoapBody struct {
	XMLName                      xml.Name `xml:"Body"`
	GetExternalIPAddressResponse GetExternalIPAddressResponse
}

type GetExternalIPAddressResponse struct {
	XMLName              xml.Name `xml:"GetExternalIPAddressResponse"`
	NewExternalIPAddress string   `xml:"NewExternalIPAddress"`
}

// getExternalIPAddress asks the Fritz!Box UPnP endpoint for the WAN IP
// and returns the raw SOAP response body.
func getExternalIPAddress(fritzBoxHostname string) (string, error) {
	// soapRequest is renamed from "xml" so it does not shadow the encoding/xml package.
	soapRequest := `<?xml version="1.0" encoding="utf-8"?>
	<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
		<s:Body>
			<u:GetExternalIPAddress xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1" />
		</s:Body>
	</s:Envelope>`

	url := fmt.Sprintf("http://%s:49000/igdupnp/control/WANIPConn1", fritzBoxHostname)

	req, err := http.NewRequest("POST", url, strings.NewReader(soapRequest))
	if err != nil {
		return "", err
	}

	req.Header.Set("Content-Type", "text/xml; charset=\"utf-8\"")
	req.Header.Set("SOAPAction", "urn:schemas-upnp-org:service:WANIPConnection:1#GetExternalIPAddress")

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	// io.ReadAll replaces the deprecated ioutil.ReadAll.
	bodyBytes, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	return string(bodyBytes), nil
}

func main() {
	fritzBoxHostname := "192.168.178.1"

	respXml, err := getExternalIPAddress(fritzBoxHostname)
	if err != nil {
		log.Fatal(err)
	}

	// Unmarshal the SOAP envelope and pull out the external address field.
	var envelope SoapEnvelope
	err = xml.Unmarshal([]byte(respXml), &envelope)
	if err != nil {
		log.Fatal(err)
	}

	ip := envelope.Body.GetExternalIPAddressResponse.NewExternalIPAddress
	fmt.Println("External IP address:", ip)
}

Module not building with xcaddy

I recently tried to build Caddy with this module using xcaddy, and it gave a fatal error.
I'm using Ubuntu 22.04.

chris@Pluto:~/docker-compose-files/caddy$ xcaddy build --with github.com/mholt/caddy-dynamicdns
2024/04/12 00:40:38 [INFO] Temporary folder: /tmp/buildenv_2024-04-12-0040.4123730380
2024/04/12 00:40:38 [INFO] Writing main module: /tmp/buildenv_2024-04-12-0040.4123730380/main.go
package main

import (
	caddycmd "github.com/caddyserver/caddy/v2/cmd"

	// plug in Caddy modules here
	_ "github.com/caddyserver/caddy/v2/modules/standard"
	_ "github.com/mholt/caddy-dynamicdns"
)

func main() {
	caddycmd.Main()
}
2024/04/12 00:40:38 [INFO] Initializing Go module
2024/04/12 00:40:38 [INFO] exec (timeout=0s): /usr/local/go/bin/go mod init caddy 
go: creating new go.mod: module caddy
go: to add module requirements and sums:
	go mod tidy
2024/04/12 00:40:38 [INFO] Pinning versions
2024/04/12 00:40:38 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v github.com/caddyserver/caddy/v2 
go: added github.com/beorn7/perks v1.0.1
go: added github.com/caddyserver/caddy/v2 v2.7.6
go: added github.com/caddyserver/certmagic v0.20.0
go: added github.com/cespare/xxhash/v2 v2.2.0
go: added github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572
go: added github.com/golang/protobuf v1.5.3
go: added github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1
go: added github.com/google/uuid v1.3.1
go: added github.com/klauspost/cpuid/v2 v2.2.5
go: added github.com/libdns/libdns v0.2.1
go: added github.com/matttproud/golang_protobuf_extensions v1.0.4
go: added github.com/mholt/acmez v1.2.0
go: added github.com/miekg/dns v1.1.55
go: added github.com/onsi/ginkgo/v2 v2.9.5
go: added github.com/prometheus/client_golang v1.15.1
go: added github.com/prometheus/client_model v0.4.0
go: added github.com/prometheus/common v0.42.0
go: added github.com/prometheus/procfs v0.9.0
go: added github.com/quic-go/qpack v0.4.0
go: added github.com/quic-go/qtls-go1-20 v0.4.1
go: added github.com/quic-go/quic-go v0.40.0
go: added github.com/zeebo/blake3 v0.2.3
go: added go.uber.org/mock v0.3.0
go: added go.uber.org/multierr v1.11.0
go: added go.uber.org/zap v1.25.0
go: added golang.org/x/crypto v0.14.0
go: added golang.org/x/exp v0.0.0-20230310171629-522b1b587ee0
go: added golang.org/x/mod v0.11.0
go: added golang.org/x/net v0.17.0
go: added golang.org/x/sys v0.14.0
go: added golang.org/x/term v0.13.0
go: added golang.org/x/text v0.13.0
go: added golang.org/x/tools v0.10.0
go: added google.golang.org/protobuf v1.31.0
2024/04/12 00:40:40 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v github.com/mholt/caddy-dynamicdns github.com/caddyserver/caddy/v2 
go: accepting indirect upgrade from github.com/quic-go/quic-go@v0.40.0 to v0.42.0
go: accepting indirect upgrade from go.uber.org/mock@v0.3.0 to v0.4.0
go: accepting indirect upgrade from golang.org/x/crypto@v0.14.0 to v0.20.0
go: accepting indirect upgrade from golang.org/x/net@v0.17.0 to v0.21.0
go: accepting indirect upgrade from golang.org/x/sys@v0.14.0 to v0.17.0
go: accepting indirect upgrade from golang.org/x/term@v0.13.0 to v0.17.0
go: accepting indirect upgrade from golang.org/x/text@v0.13.0 to v0.14.0
go: downloading google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1
go: upgraded github.com/jackc/pgconn v1.14.0 => v1.14.3
go: upgraded github.com/jackc/pgproto3/v2 v2.3.2 => v2.3.3
go: upgraded github.com/jackc/pgx/v4 v4.18.0 => v4.18.2
go: added github.com/mholt/caddy-dynamicdns v0.0.0-20240411202905-a0a258134505
go: upgraded github.com/quic-go/quic-go v0.40.0 => v0.42.0
go: added gitlab.com/NebulousLabs/fastrand v0.0.0-20181126182046-603482d69e40
go: added gitlab.com/NebulousLabs/go-upnp v0.0.0-20211002182029-11da932010b6
go: upgraded go.uber.org/mock v0.3.0 => v0.4.0
go: upgraded golang.org/x/crypto v0.14.0 => v0.20.0
go: upgraded golang.org/x/net v0.17.0 => v0.21.0
go: upgraded golang.org/x/sys v0.14.0 => v0.17.0
go: upgraded golang.org/x/term v0.13.0 => v0.17.0
go: upgraded golang.org/x/text v0.13.0 => v0.14.0
2024/04/12 00:40:47 [INFO] exec (timeout=0s): /usr/local/go/bin/go get -d -v  
2024/04/12 00:40:50 [INFO] Build environment ready
2024/04/12 00:40:50 [INFO] Building Caddy
2024/04/12 00:40:50 [INFO] exec (timeout=0s): /usr/local/go/bin/go mod tidy -e 
2024/04/12 00:40:50 [INFO] exec (timeout=0s): /usr/local/go/bin/go build -o /home/chris/docker-compose-files/caddy/caddy -ldflags -w -s -trimpath 
# github.com/caddyserver/caddy/v2
/home/chris/go/pkg/mod/github.com/caddyserver/caddy/[email protected]/listeners.go:477:4: unknown field RequireAddressValidation in struct literal of type quic.Config
/home/chris/go/pkg/mod/github.com/caddyserver/caddy/[email protected]/listeners.go:516:4: unknown field RequireAddressValidation in struct literal of type quic.Config
2024/04/12 00:40:51 [INFO] Cleaning up temporary folder: /tmp/buildenv_2024-04-12-0040.4123730380
2024/04/12 00:40:51 [FATAL] exit status 1

[Feature request] Add an option to use address on specific interface directly.

In my case, the host running Caddy has a complex network setup.
All outbound traffic is routed to a TUN interface by default, which leads to a wrong result with simple_http or upnp.
The physical interface, however, has a dynamic address that is reachable from the internet.
It would help if there were an option to use the address on a specific interface directly.

Variable for public ip

I use Docker for my home lab. Does the module have the ability to take the IP from a variable, so that it can be specified in Docker Compose?

Update fails for Digital Ocean

Using the configuration below, I get the error shown in the log excerpt that follows. It seems that somewhere in the flow the record ID is not getting set, hence the strconv.Atoi error. In the end, no records are created. Using the same provider to obtain the certificate for TLS, the proper records are created and the cert is obtained successfully.

Any help would be appreciated.

Thanks,
-- Ed

{
	debug

	dynamic_dns {
		provider digitalocean <api_key>

		ip_source simple_http https://icanhazip.com
		versions ipv4

		domains {
			example.com @ www
		}
	}
}

example.com {
	tls internal

	import proxy_upstream
}
2023/11/09 17:14:52.606 DEBUG   dynamic_dns     beginning IP address check
2023/11/09 17:14:52.613 INFO    pki.ca.local    root certificate is already trusted by system   {"path": "storage:pki/authorities/local/root.crt"}
2023/11/09 17:14:52.613 INFO    http    enabling HTTP/3 listener        {"addr": ":443"}
2023/11/09 17:14:52.613 INFO    tls     cleaning storage unit   {"description": "FileStorage:/root/.local/share/caddy"}
2023/11/09 17:14:52.613 DEBUG   http    starting server loop    {"address": "[::]:443", "tls": true, "http3": true}
2023/11/09 17:14:52.613 INFO    http.log        server running  {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2023/11/09 17:14:52.613 DEBUG   http    starting server loop    {"address": "[::]:80", "tls": false, "http3": false}
2023/11/09 17:14:52.613 INFO    http.log        server running  {"name": "remaining_auto_https_redirects", "protocols": ["h1", "h2", "h3"]}
2023/11/09 17:14:52.613 INFO    http    enabling automatic TLS certificate management   {"domains": ["example.com"]}
2023/11/09 17:14:52.613 INFO    tls     finished cleaning storage units
2023/11/09 17:14:52.613 WARN    tls     stapling OCSP   {"error": "no OCSP stapling for [example.com]: no OCSP server specified in certificate", "identifiers": ["example.com"]}
2023/11/09 17:14:52.613 DEBUG   tls.cache       added certificate to cache      {"subjects": ["example.com"], "expiration": "2023/11/10 04:32:18.000", "managed": true, "issuer_key": "local", "hash": "288de7c42a45ffd59cc0d28623a9fd8a4e0282335bb95cfefa10848edc0483c0", "cache_size": 1, "cache_capacity": 10000}
2023/11/09 17:14:52.613 DEBUG   events  event   {"name": "cached_managed_cert", "id": "76f34dc5-098d-40d3-bf85-d7cf93cce0c2", "origin": "tls", "data": {"sans":["example.com"]}}
2023/11/09 17:14:52.614 INFO    autosaved config (load with --resume flag)      {"file": "/root/.config/caddy/autosave.json"}
2023/11/09 17:14:52.614 INFO    serving initial configuration
2023/11/09 17:14:53.046 DEBUG   dynamic_dns     found DNS record        {"type": "A", "name": "lighthouse.example.com.", "zone": "example.com", "value": "127.0.0.1"}
2023/11/09 17:14:53.046 DEBUG   dynamic_dns     found DNS record        {"type": "A", "name": "@.example.com.", "zone": "example.com", "value": "127.0.0.1"}
2023/11/09 17:14:53.047 INFO    dynamic_dns     domain not found in DNS {"domain": "example.com"}
2023/11/09 17:14:53.047 INFO    dynamic_dns     domain not found in DNS {"domain": "example.com"}
2023/11/09 17:14:53.047 INFO    dynamic_dns     domain not found in DNS {"domain": "www.example.com"}
2023/11/09 17:14:53.047 INFO    dynamic_dns     domain not found in DNS {"domain": "www.example.com"}
2023/11/09 17:14:53.047 DEBUG   dynamic_dns     looked up current IPs from DNS  {"lastIPs": {"example.com":{"A":[""],"AAAA":[""]},"www.example.com":{"A":[""],"AAAA":[""]}}}
2023/11/09 17:14:53.382 DEBUG   dynamic_dns.ip_sources.simple_http      lookup  {"type": "IPv4", "endpoint": "https://icanhazip.com", "ip": "127.0.0.1"}
2023/11/09 17:14:53.382 INFO    dynamic_dns     updating DNS record     {"zone": "example.com", "type": "A", "name": "@", "value": "127.0.0.1", "ttl": 0}
2023/11/09 17:14:53.382 INFO    dynamic_dns     updating DNS record     {"zone": "example.com", "type": "A", "name": "www", "value": "127.0.0.1", "ttl": 0}
2023/11/09 17:14:53.382 ERROR   dynamic_dns     failed setting DNS record(s) with new IP address(es)    {"zone": "example.com", "error": "strconv.Atoi: parsing \"\": invalid syntax"}
2023/11/09 17:14:53.382 INFO    dynamic_dns     finished updating DNS   {"current_ips": ["127.0.0.1"]}

Add ability to update additional A records.

Below is a possible example.
Ideally you can point to the primary domain with a CNAME, but you usually have the default root domain and then www.

{
	dynamic_dns {
		domain example.com
		record_list www,@,sub
		provider gandi {env.GANDI_API_TOKEN}
		check_interval 5m
	}
}

IPv6 lookup failed with api.ipify.org

Hi there,

I found the following warning while running this app with lucaslorentz/caddy-docker-proxy:

{"level":"warn","ts":1637216941.4177737,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://api.ipify.org","error":"Get \"https://api.ipify.org\": dial tcp6: lookup api.ipify.org on 127.0.0.11:53: no such host"}

Indeed there are no AAAA records:

Non-authoritative answer:
api.ipify.org	canonical name = api.ipify.org.herokudns.com.
Name:	api.ipify.org.herokudns.com
Address: 3.220.57.224
Name:	api.ipify.org.herokudns.com
Address: 52.20.78.240
Name:	api.ipify.org.herokudns.com
Address: 3.232.242.170
Name:	api.ipify.org.herokudns.com
Address: 54.91.59.199

According to www.ipify.org, api.ipify.org will output IPv4 only. For both IPv4 and IPv6, we should use api64.ipify.org:

Oct 1, 2020 the A record for api6.ipify.org will be removed to make the subdomain only for IPv6 requests. For universal access please use api64.ipify.org.

Using multiple providers

Is it possible to use multiple providers to update different domains? I tried this config (simplified)

{
    dynamic_dns {
        provider cloudflare {$CLOUDFLARE_API_TOKEN}
        domains {
            main-domain.com
        }
    }

    dynamic_dns {
        provider duckdns {$DUCKDNS_TOKEN}
        domains {
            secondary-domain.com
        }
    }
}

but it seems like only the second dynamic_dns block (for DuckDNS) is functional; main-domain.com is not updated at all.

dns provider hetzner

Currently I am fiddling with Caddy.
With this plugin, ~3 DDNS containers of mine could become obsolete.

I tested with the Hetzner DNS provider and found that the A records are duplicated rather than updated.
Maybe an option could be added to disable the creation of records and only update existing ones.

This would mean adding the record manually while updates happen automatically.
It also prevents erroneous duplicate entries.

Happy to provide further details. A minimal setup is sadly difficult, as it is only possible with a token.

I applied the following DDNS details:
domain.xyz @ ddns
domain2.xyz @ root

Both domains got processed with two entries each, but ended up having duplicates within Hetzner DNS.

Thanks!
Gandalf

Cloudflare record update fails

Hi. I am trying to use your module to update my DNS records with Caddy, but it fails. Here are the logs:

{"level":"info","ts":1650453680.2739289,"logger":"dynamic_dns","msg":"updating DNS record","zone":"mydomain.fr","type":"A","name":"test","value":"myip","ttl":1800}
{"level":"error","ts":1650453681.071767,"logger":"dynamic_dns","msg":"failed setting DNS record(s) with new IP address(es)","zone":"mydomain.fr","error":"got error status: HTTP 400: [{Code:6003 Message:Invalid request headers}]"}

Here is the global section of my Caddyfile:

{
	debug
	dynamic_dns {
		provider cloudflare {env.CLOUDFLARE_API_TOKEN}
		domains {
			mydomain.fr test
		}
	}
}
Otherwise I do succeed in getting my TLS certificates issued (ACME DNS challenge) from Cloudflare, so I suppose it may be a credentials issue.
Could you please help?

Thank you.

Feature Request - Add ability to either request IPv4, IPv6 or both.. (Or disable IPv6)

My logs constantly show attempts to look up IPv6. If we know that we don't have IPv6 or our ISP doesn't support it, it would be nice to either disable it or, via some parameter, only request IPv4.

{"level":"warn","ts":1614694441.669056,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv4 lookup failed","endpoint":"https://api.ipify.org","error":"https://api.ipify.org: server response was: 503 503 Service Unavailable"}
{"level":"warn","ts":1614694441.67404,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://api.ipify.org","error":"Get \"https://api.ipify.org\": dial tcp6: address api.ipify.org: no suitable address found"}
{"level":"warn","ts":1614694441.8593335,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://myip.addr.space","error":"Get \"https://myip.addr.space\": dial tcp6 [2607:5300:203:d0:1::d089]:443: connect: cannot assign requested address"}
{"level":"warn","ts":1614694741.6912205,"logger":"dynamic_dns.ip_sources.simple_http","msg":"IPv6 lookup failed","endpoint":"https://api.ipify.org","error":"Get \"https://api.ipify.org\": dial tcp6: address api.ipify.org: no suitable address found"}

[Porkbun] DNS Entries are not overwritten, they are appended

I'm using dynamic_dns with the porkbun provider. The A records are created, but instead of overwriting the old IP with the new one, the old entry is left untouched and a new one is created.

DNS entries: [screenshot of the duplicated records]

# ./caddy run --envfile /opt/CaddyV2/.env
2023/09/02 05:31:42.858	INFO	using adjacent Caddyfile
2023/09/02 05:31:42.997	INFO	admin	admin endpoint started	{"address": "localhost:2019", "enforce_origin": false, "origins": ["//127.0.0.1:2019", "//localhost:2019", "//[::1]:2019"]}
2023/09/02 05:31:43.001	INFO	tls.cache.maintenance	started background certificate maintenance	{"cache": "0x870094600"}
2023/09/02 05:31:43.002	INFO	http.auto_https	server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS	{"server_name": "srv0", "https_port": 443}
2023/09/02 05:31:43.006	INFO	http.auto_https	enabling automatic HTTP->HTTPS redirects	{"server_name": "srv0"}
2023/09/02 05:31:43.043	INFO	tls	cleaning storage unit	{"description": "FileStorage:/opt/CaddyV2/data"}
2023/09/02 05:31:43.043	INFO	http	enabling HTTP/3 listener	{"addr": ":443"}
2023/09/02 05:31:43.047	INFO	http.log	server running	{"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2023/09/02 05:31:43.049	INFO	http.log	server running	{"name": "remaining_auto_https_redirects", "protocols": ["h1", "h2", "h3"]}
2023/09/02 05:31:43.049	INFO	http	enabling automatic TLS certificate management	{"domains": ["home.mietzen.xyz", "*.home.mietzen.xyz"]}
2023/09/02 05:31:43.060	INFO	tls	finished cleaning storage units
2023/09/02 05:31:48.787	INFO	autosaved config (load with --resume flag)	{"file": "/root/.config/caddy/autosave.json"}
2023/09/02 05:31:48.791	INFO	serving initial configuration
2023/09/02 05:31:55.162	INFO	dynamic_dns	domain not found in DNS	{"domain": "wireguard.home.mietzen.xyz"}
2023/09/02 05:31:55.163	INFO	dynamic_dns	domain not found in DNS	{"domain": "wireguard.home.mietzen.xyz"}
2023/09/02 05:31:55.163	INFO	dynamic_dns	domain not found in DNS	{"domain": "home-assistant.home.mietzen.xyz"}
2023/09/02 05:31:55.163	INFO	dynamic_dns	domain not found in DNS	{"domain": "home-assistant.home.mietzen.xyz"}
2023/09/02 05:31:55.163	INFO	dynamic_dns	domain not found in DNS	{"domain": "vaultwarden.home.mietzen.xyz"}
2023/09/02 05:31:55.163	INFO	dynamic_dns	domain not found in DNS	{"domain": "vaultwarden.home.mietzen.xyz"}
2023/09/02 05:31:55.227	INFO	dynamic_dns	updating DNS record	{"zone": "mietzen.xyz", "type": "A", "name": "wireguard.home", "value": "xxx.xxx.xxxx.172", "ttl": 3600}
2023/09/02 05:31:55.227	INFO	dynamic_dns	updating DNS record	{"zone": "mietzen.xyz", "type": "A", "name": "home-assistant.home", "value": "xxx.xxx.xxxx.172", "ttl": 3600}
2023/09/02 05:31:55.228	INFO	dynamic_dns	updating DNS record	{"zone": "mietzen.xyz", "type": "A", "name": "vaultwarden.home", "value": "xxx.xxx.xxxx.172", "ttl": 3600}
2023/09/02 05:31:57.973	INFO	dynamic_dns	finished updating DNS	{"current_ips": ["xxx.xxx.xxxx.172"]}
{
	storage file_system {
		root /opt/CaddyV2/data
	}
	log caddy {
		output file /opt/CaddyV2/data/logs/caddy.log {
			roll_size 10MiB
			roll_local_time
			roll_keep 5
			roll_keep_for 336h
		}
		format console {
			time_local
			time_format wall
		}
		level INFO
	}
	email [email protected]
	dynamic_dns {
		provider porkbun {
			api_key {env.PORKBUN_API_KEY}
			api_secret_key {env.PORKBUN_API_SECRET_KEY}
		}
		domains {
			mietzen.xyz wireguard.home
			mietzen.xyz home-assistant.home
			mietzen.xyz vaultwarden.home
		}
		versions ipv4
		ip_source command /opt/CaddyV2/fritzbox_ext_ip 192.168.178.1
		check_interval 5m
		ttl 1h
	}
}
# ./caddy version
v2.7.4 h1:J8nisjdOxnYHXlorUKXY75Gr6iBfudfoGhrJ8t7/flI=
# /opt/CaddyV2/caddy list-modules
admin.api.load
admin.api.metrics
admin.api.pki
admin.api.reverse_proxy
caddy.adapters.caddyfile
caddy.config_loaders.http
caddy.listeners.http_redirect
caddy.listeners.proxy_protocol
caddy.listeners.tls
caddy.logging.encoders.console
caddy.logging.encoders.filter
caddy.logging.encoders.filter.cookie
caddy.logging.encoders.filter.delete
caddy.logging.encoders.filter.hash
caddy.logging.encoders.filter.ip_mask
caddy.logging.encoders.filter.query
caddy.logging.encoders.filter.regexp
caddy.logging.encoders.filter.rename
caddy.logging.encoders.filter.replace
caddy.logging.encoders.json
caddy.logging.writers.discard
caddy.logging.writers.file
caddy.logging.writers.net
caddy.logging.writers.stderr
caddy.logging.writers.stdout
caddy.storage.file_system
events
http
http.authentication.hashes.bcrypt
http.authentication.hashes.scrypt
http.authentication.providers.http_basic
http.encoders.gzip
http.encoders.zstd
http.handlers.acme_server
http.handlers.authentication
http.handlers.copy_response
http.handlers.copy_response_headers
http.handlers.encode
http.handlers.error
http.handlers.file_server
http.handlers.headers
http.handlers.invoke
http.handlers.map
http.handlers.metrics
http.handlers.push
http.handlers.request_body
http.handlers.reverse_proxy
http.handlers.rewrite
http.handlers.static_response
http.handlers.subroute
http.handlers.templates
http.handlers.tracing
http.handlers.vars
http.ip_sources.static
http.matchers.client_ip
http.matchers.expression
http.matchers.file
http.matchers.header
http.matchers.header_regexp
http.matchers.host
http.matchers.method
http.matchers.not
http.matchers.path
http.matchers.path_regexp
http.matchers.protocol
http.matchers.query
http.matchers.remote_ip
http.matchers.vars
http.matchers.vars_regexp
http.precompressed.br
http.precompressed.gzip
http.precompressed.zstd
http.reverse_proxy.selection_policies.client_ip_hash
http.reverse_proxy.selection_policies.cookie
http.reverse_proxy.selection_policies.first
http.reverse_proxy.selection_policies.header
http.reverse_proxy.selection_policies.ip_hash
http.reverse_proxy.selection_policies.least_conn
http.reverse_proxy.selection_policies.query
http.reverse_proxy.selection_policies.random
http.reverse_proxy.selection_policies.random_choose
http.reverse_proxy.selection_policies.round_robin
http.reverse_proxy.selection_policies.uri_hash
http.reverse_proxy.selection_policies.weighted_round_robin
http.reverse_proxy.transport.fastcgi
http.reverse_proxy.transport.http
http.reverse_proxy.upstreams.a
http.reverse_proxy.upstreams.multi
http.reverse_proxy.upstreams.srv
pki
tls
tls.certificates.automate
tls.certificates.load_files
tls.certificates.load_folders
tls.certificates.load_pem
tls.certificates.load_storage
tls.client_auth.leaf
tls.get_certificate.http
tls.get_certificate.tailscale
tls.handshake_match.remote_ip
tls.handshake_match.sni
tls.issuance.acme
tls.issuance.internal
tls.issuance.zerossl
tls.stek.distributed
tls.stek.standard

  Standard modules: 106

dns.providers.cloudflare
dns.providers.porkbun
dynamic_dns
dynamic_dns.ip_sources.command
dynamic_dns.ip_sources.interface
dynamic_dns.ip_sources.simple_http
dynamic_dns.ip_sources.upnp

  Non-standard modules: 7

  Unknown modules: 0

OS: OPNsense 23.7.3-amd64

Does not update IPv6

config

{
	dynamic_dns {
		provider cloudflare <API_TOKEN>
		domains {
			domain.com sub-1 sub-2
		}
		ip_source upnp
		ip_source simple_http https://icanhazip.com
		ip_source simple_http https://api64.ipify.org
		check_interval 2m
		versions ipv4 ipv6
		ttl 1m
	}
}

log

[screenshot of log output]

CheckInterval is the same as TTL

In the README, it is implied that check_interval sets how often the external IP is checked.
There is an unexpected side effect though: the same value is used as the TTL of the record.
I believe this is unexpected (and unwanted) behavior, and it also prevents using low intervals (anything lower than 5 minutes is rejected by Cloudflare).

I think there should be a different value for setting the TTL, as opposed to the checking interval.

failed setting DNS record(s) with new IP address(es) - expected 1 zone, got 0 for [external.domain.tld]

Hey. I currently have Caddy proxying a few internal, closed domains on local.domain.tld.

Now I wanted to open some services to the internet so I don't have to install a VPN on every device.

However, I can't get the DynDNS module to update the domain.

The error is the one in the title:

dynamic_dns","msg":"failed setting DNS record(s) with new IP address(es)","zone":"[domain redacted]","error":"expected 1 zone, got 0 for [domain redacted]"}

Does anyone have an idea what the problem could be?

Feature request: option to only update existing records, not create new.

I have only an A record on ipv4.example.com and only an AAAA record on ipv6.example.com, but when I use this module it creates the missing AAAA and A records for those subdomains, which I didn't want.

Maybe something like an update_only option, which would just update existing records without creating new ones?

Use netip package?

The new net/netip package has more efficient types and useful methods. This isn't exactly a hot-path package, but I do wonder if it'd be worth moving to netip rather than waiting much longer.
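
For illustration, a small standalone example (not from this module) of what netip buys over net.IP: netip.Addr is a compact, comparable value type, so it works as a map key and with ==, unlike the slice-backed net.IP:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	a, _ := netip.ParseAddr("2001:db8::1")
	b, _ := netip.ParseAddr("2001:db8::1")
	fmt.Println(a == b) // true: plain value comparison

	seen := map[netip.Addr]bool{a: true}
	fmt.Println(seen[b]) // true: usable directly as a map key

	// net.IP needs Equal() and cannot be a map key without converting to a string.
	x := net.ParseIP("2001:db8::1")
	y := net.ParseIP("2001:db8::1")
	fmt.Println(x.Equal(y)) // true, but only via a method call
}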

panic: assignment to entry in nil map

May 10 22:21:01 server2 caddy[1379103]: {"level":"info","ts":1683750061.5673218,"logger":"dynamic_dns","msg":"domain not found in DNS","domain":"dyn"}
May 10 22:21:01 server2 caddy[1379103]: {"level":"info","ts":1683750061.567349,"logger":"dynamic_dns","msg":"domain not found in DNS","domain":"dyn"}
May 10 22:21:02 server2 caddy[1379103]: {"level":"info","ts":1683750062.661033,"logger":"dynamic_dns","msg":"updating DNS record","zone":"~~~","type":"A","name":"~~~","value":"~~~","ttl":0}
May 10 22:21:02 server2 caddy[1379103]: {"level":"info","ts":1683750062.6610668,"logger":"dynamic_dns","msg":"updating DNS record","zone":"~~~","type":"AAAA","name":"~~~","value":"~~~","ttl":0}
May 10 22:21:06 server2 caddy[1379103]: panic: assignment to entry in nil map
May 10 22:21:06 server2 caddy[1379103]:
May 10 22:21:06 server2 caddy[1379103]: goroutine 49 [running]:
May 10 22:21:06 server2 caddy[1379103]: github.com/mholt/caddy-dynamicdns.App.checkIPAndUpdateDNS({{0x0, 0x0, 0x0}, {0x0, 0x0, 0x0}, 0xc00021d5c0, 0x0, 0x0, {0x0, ...}, ...})
May 10 22:21:06 server2 caddy[1379103]:         github.com/mholt/caddy-dynamicdns@<version>/dynamicdns.go:254 +0xab1
May 10 22:21:06 server2 caddy[1379103]: github.com/mholt/caddy-dynamicdns.App.checkerLoop({{0x0, 0x0, 0x0}, {0x0, 0x0, 0x0}, 0xc00021d5c0, 0x0, 0x0, {0x0, ...}, ...})
May 10 22:21:06 server2 caddy[1379103]:         github.com/mholt/caddy-dynamicdns@<version>/dynamicdns.go:150 +0xb8
May 10 22:21:06 server2 caddy[1379103]: created by github.com/mholt/caddy-dynamicdns.App.Start
May 10 22:21:06 server2 caddy[1379103]:         github.com/mholt/caddy-dynamicdns@<version>/dynamicdns.go:135 +0xaa
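
For reference, here is a hypothetical minimal reproduction of this class of panic (the variable names are illustrative, not the module's actual code): writing into a nested map whose inner map was never initialized.

package main

import "fmt"

func main() {
	lastIPs := map[string]map[string][]string{} // outer map initialized, inner maps are nil

	domain, recordType := "dyn.example.com", "A"

	// This would panic with "assignment to entry in nil map",
	// because lastIPs[domain] is the zero value nil map:
	// lastIPs[domain][recordType] = []string{"203.0.113.7"}

	// Fix: initialize the inner map before assigning to it.
	if lastIPs[domain] == nil {
		lastIPs[domain] = make(map[string][]string)
	}
	lastIPs[domain][recordType] = []string{"203.0.113.7"}

	fmt.Println(lastIPs)
}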

Route53 Lookup Fails

I am constantly getting the following with Route53 using this plugin:

{"level":"error","ts":1643296784.4517176,"logger":"dynamic_dns","msg":"unable to lookup current IPs from DNS records","error":"HostedZoneNotFound: No zones found for the domain XXX.tm"}
{"level":"error","ts":1643296803.113794,"logger":"dynamic_dns","msg":"failed setting DNS record(s) with new IP address(es)","zone":"XXX.tm","error":"HostedZoneNotFound: No zones found for the domain XXX.tm"}

This is my IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "route53:ListResourceRecordSets",
                "route53:GetChange",
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/XXX",
                "arn:aws:route53:::change/*"
            ]
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZonesByName",
                "route53:ListHostedZones"
            ],
            "Resource": "*"
        }
    ]
}

This setup works fine for DNS challenges (I think), but not for these dynamic DNS updates with Route53.

Is this because I need a less restrictive policy?

NOTE, FWIW this domain is on the ".tm" TLD.

Caddyfile Support

Could you please give some clues about how to configure this app using a Caddyfile?

dynamic_dns is throwing error - server response was: 503 503 Service Unavailable

I've configured the dynamic_dns module in the Global options block in my Caddyfile.

When I restart the Caddy Docker container, it works without issue. Then subsequent refreshes generate errors.

I'm using Caddy 2.4.0-beta1, or close to it. This is my Dockerfile:

FROM caddy:builder AS builder

RUN xcaddy build bafb562991598df703a744e13cbc06472e71349e \
    --with github.com/mholt/caddy-dynamicdns \
    --with github.com/caddy-dns/gandi

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

And this is the Caddyfile that handles the DNS update:

{
        dynamic_dns {
                domain www.site.com
                provider gandi <APIKEY>
                check_interval 5m
        }
        acme_dns gandi <APIKEY>
        email [email protected]
}

This is the log of the DNS update and the subsequent failures:

{"level":"error","ts":1614279644.7509358,"logger":"dynamic_dns","msg":"checking IP address","error":"server response was: 503 503 Service Unavailable"}
{"level":"info","ts":1614280082.1940167,"logger":"dynamic_dns","msg":"IP address changed","last_ip":"<nil>","new_ip":"XX.XX.XX.XX"}
{"level":"info","ts":1614280082.1940715,"logger":"dynamic_dns","msg":"updating DNS record","type":"A","name":"www.site.com","value":"XX.XX.XX.XX","ttl":300}
{"level":"info","ts":1614280082.9069586,"logger":"dynamic_dns","msg":"finished updating DNS"}
{"level":"error","ts":1614296010.809118,"logger":"dynamic_dns","msg":"checking IP address","error":"server response was: 503 503 Service Unavailable"}
{"level":"error","ts":1614298080.8029675,"logger":"dynamic_dns","msg":"checking IP address","error":"server response was: 503 503 Service Unavailable"}
{"level":"error","ts":1614299280.7873015,"logger":"dynamic_dns","msg":"checking IP address","error":"server response was: 503 503 Service Unavailable"}
{"level":"error","ts":1614299580.7971187,"logger":"dynamic_dns","msg":"checking IP address","error":"server response was: 503 503 Service Unavailable"}

Bug: Will not update when one domain is outdated but another is up-to-date

If you intend to update two domains (say A.com and B.com), and A.com is correct but B.com is outdated or incorrect, this utility will still not update B.com at startup.
I believe this is caused by the lastIPs table being filled from the current DNS records, so at that point it contains both the correct and the outdated IP; when a response comes back from one of the IP sources, the module considers the record already up to date because that IP is already contained in lastIPs.
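
For illustration, a minimal sketch (the names are illustrative, not the module's actual code) of the difference between the suspected flat-set check and a per-domain check:

package main

import "fmt"

func main() {
	currentIP := "203.0.113.10"
	lastIPs := map[string]string{
		"a.example.com": "203.0.113.10", // up to date
		"b.example.com": "198.51.100.9", // outdated
	}

	// Suspected flat check: "is currentIP already known anywhere?"
	known := false
	for _, ip := range lastIPs {
		if ip == currentIP {
			known = true
		}
	}
	fmt.Println("flat check says an update is needed:", !known) // false, so b.example.com never updates

	// Per-domain check: compare each domain's own last-known IP.
	for domain, ip := range lastIPs {
		if ip != currentIP {
			fmt.Println("would update", domain)
		}
	}
}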

Support dynamically generated domains

Imagine a Caddyfile like this:

{
	dynamic_dns {
		provider cloudflare {env.CLOUDFLARE_API_TOKEN}
		domains {
			example.com on_demand
		}
		check_interval 5m
	}
}

yo.example.com {
  dynamic_dns
  reverse_proxy 127.0.0.1:10000
}

hi.example.com {
  dynamic_dns
  file_server
}

This middleware could then pick up the domains that have dynamic_dns in their config and start configuring DNS for them. This removes the need to update the top-level block whenever a new domain is added.

I can also look into the implementation later, once we agree on the direction.

Thanks.
