dragonflyoss / Dragonfly
This repository has been archived and moved to the new repository: https://github.com/dragonflyoss/Dragonfly2.
Home Page: https://d7y.io
License: Apache License 2.0
Are both the seed files and the file metadata stored locally?
Shouldn't the seed files be persisted to a database?
I didn't see any database-related operations in the code.
server {
    listen 8001;
    location / {
        root /home/admin/supernode/repo;
    }
}
What is the configuration above for? How should the root path be written? Also, could Chinese documentation be provided? Thanks.
In the Usage section, I didn't see any configuration between df-daemon and supernode, so how can I download images via the supernode?
In the architecture, when I pull images through df-daemon, df-daemon is supposed to connect to the supernode (or the cluster manager — by the way, why the different names?) and the supernode then connects to the Docker registry.
But in practice, df-daemon connects to the registry directly (judging from the logs and source code), using the same port I specified in docker pull.
Steps:
docker run --rm -ti -p 8001:8001 -p 8002:8002 dragonfly:supernode
(but I don't know what this is for)
./df-daemon -verbose -registry https://r.kfd.me
(BTW, why can't I use a port lower than 2000? See https://github.com/alibaba/Dragonfly/blob/e42364eb8f1fb7a16295dcc5a4c3f2e7a5c6d468/src/daemon/src/df-daemon/initializer/initializer.go#L199)
docker pull r.kfd.me:65001/alpine
on another host that points r.kfd.me to the server.
Can you give an elaborate tutorial for setting up a docker-proxy? Or is it a limitation that Dragonfly cannot act as a Docker proxy (just a mirror for now)?
Does the server side use ports 80, 8001, and 8002? Any others?
Which ports does the client use?
Are they all TCP?
If my clients are distributed across the Internet, which ports need to be opened in the firewall?
If a client sits behind a router/firewall with no public IP, what port mappings are needed? Is NAT traversal supported natively?
I don't mean router-level traversal like UPnP or NAT-PMP.
What is the maximum file size supported for P2P transfer?
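For reference, a quick way to check which of these ports are reachable from a client is a plain TCP connect test (the host name below is a placeholder; substitute your supernode's address):

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Ports mentioned in this thread; adjust to your deployment.
    for port in (8001, 8002):
        state = "open" if tcp_port_open("supernode.example.com", port) else "closed"
        print(port, state)
```

A client behind NAT only needs outbound TCP to these server ports for registration and piece download; inbound mappings matter only for serving pieces to peers.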
As the title says: I ran the supernode locally and got the following error:
十一月 29, 2017 5:21:28 下午 org.apache.coyote.AbstractProtocol init
信息: Initializing ProtocolHandler ["http-bio-8080"]
十一月 29, 2017 5:21:28 下午 org.apache.catalina.core.StandardService startInternal
信息: Starting service Tomcat
十一月 29, 2017 5:21:28 下午 org.apache.catalina.core.StandardEngine startInternal
信息: Starting Servlet Engine: Apache Tomcat/7.0.37
十一月 29, 2017 5:21:32 下午 org.apache.catalina.core.ApplicationContext log
信息: No Spring WebApplicationInitializer types detected on classpath
十一月 29, 2017 5:21:32 下午 org.apache.catalina.core.ApplicationContext log
信息: Initializing Spring root WebApplicationContext
十一月 29, 2017 5:21:32 下午 org.springframework.web.context.ContextLoader initWebApplicationContext
信息: Root WebApplicationContext: initialization started
十一月 29, 2017 5:21:33 下午 org.springframework.web.context.support.XmlWebApplicationContext prepareRefresh
信息: Refreshing Root WebApplicationContext: startup date [Wed Nov 29 17:21:33 CST 2017]; root of context hierarchy
十一月 29, 2017 5:21:33 下午 org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
信息: Loading XML bean definitions from class path resource [application-context.xml]
十一月 29, 2017 5:21:34 下午 org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
信息: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
十一月 29, 2017 5:21:35 下午 org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler initialize
信息: Initializing ExecutorService 'scheduler'
十一月 29, 2017 5:21:35 下午 org.springframework.web.context.support.XmlWebApplicationContext postProcessAfterInitialization
信息: Bean 'scheduler' of type [class org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
十一月 29, 2017 5:21:35 下午 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
信息: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@39641cb4: defining beans [scheduler,org.springframework.context.annotation.internalAsyncAnnotationProcessor,org.springframework.context.annotation.internalScheduledAnnotationProcessor,monitorService,netConfigNotification,powerRateLimiter,peerRepository,peerTaskRepository,progressRepository,taskRepository,cacheDetectorImpl,linkPositiveGc,cdnManagerImpl,cdnReporter,commonPeerDispatcher,fileMetaDataService,peerRegistryService,peerService,peerTaskService,progressService,taskService,lockService,dataGcService,diskSpaceGcTimer,downSpaceCleaner,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,utilBeanPool,org.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor,org.springframework.scheduling.annotation.SchedulingConfiguration]; root of factory hierarchy
Disconnected from the target VM, address: '127.0.0.1:52916', transport: 'socket'
Process finished with exit code 1
I searched online for a long time without finding a solution. Could someone please advise how to resolve this error?
When I run docker pull, I get errors as follows:
root@iZ2ze7tbgo9lk7jubiy6tcZ:~# df-daemon -registry https://docker.acmcoder.com -port 18001
launch df-daemon on port:18001
2017/12/12 18:21:28 http: proxy error: context canceled
2017/12/12 18:21:32 http: proxy error: context canceled
2017/12/12 18:21:34 http: proxy error: context canceled
2017/12/12 18:21:35 http: proxy error: context canceled
2017/12/12 18:21:36 http: proxy error: context canceled
2017/12/12 18:21:36 http: proxy error: dial tcp: i/o timeout
2017/12/12 18:21:37 http: proxy error: context canceled
2017/12/12 18:21:38 http: proxy error: context canceled
Killed
root@iZ2ze7tbgo9lk7jubiy6tcZ:~# curl https://docker.acmcoder.com
<!doctype html>
I always get errors like this:
[2018-05-11 18:48:26,944] ERROR sign:59678-1526035671.512 lineno:81 : register to node:192.168.21.23 error
Traceback (most recent call last):
File "/usr/local/df-client/component/httputil.py", line 67, in register
schema, node), data=params, timeout=(2.0, 5.0))
File "/usr/local/df-client/vendor/requests-2.18.4-py2.7.egg/requests/sessions.py", line 555, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/local/df-client/vendor/requests-2.18.4-py2.7.egg/requests/sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/df-client/vendor/requests-2.18.4-py2.7.egg/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/usr/local/df-client/vendor/requests-2.18.4-py2.7.egg/requests/adapters.py", line 521, in send
raise ReadTimeout(e, request=request)
ReadTimeout: HTTPConnectionPool(host='10.58.83.14', port=8002): Read timed out. (read timeout=5.0)
[2018-05-11 18:48:26,944] ERROR sign:59678-1526035671.512 lineno:243 : p2p fail
Traceback (most recent call last):
File "/usr/local/df-client/core/fetcher.py", line 230, in run
self.identifier)
File "/usr/local/df-client/component/httputil.py", line 126, in pull_piece_task
identifier)
File "/usr/local/df-client/component/httputil.py", line 86, in register
raise Exception("register result:%s" % result)
Exception: register result:None
I configured everything as the guide describes. When I use docker pull, e.g. docker pull test/alpine:latest, it pulls from docker.io/test/alpine by default.
How can I configure Docker?
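For a mirror-style setup, Docker is usually pointed at the df-daemon listener via registry-mirrors in /etc/docker/daemon.json (the port below assumes df-daemon's default of 65001; adjust to your deployment):

```json
{
  "registry-mirrors": ["http://127.0.0.1:65001"]
}
```

After editing, restart the Docker daemon so the mirror takes effect. Note that a registry mirror only applies to images pulled from Docker Hub; pulls addressed to another registry host bypass it.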
Having read through the whole project, a few thoughts:
I believe more and better content will be released, but the current state honestly leaves something to be desired.
I installed following install_server.md.
When deploying via Docker, some URLs in the Dockerfile are no longer valid, e.g. the JDK download.
Could install_server.md and the Dockerfile be updated?
Dragonfly has a default rate limit of 20 MB/s. How can I change it to unlimited? Any guidance is appreciated.
I couldn't find related functionality in the client code. Does every interrupted download require re-downloading the entire file?
Sorry, I am looking for Chinese documentation on this subject but cannot find it; maybe I overlooked it somewhere.
As far as I know (I was an Alibaba R&D engineer before), most Alibaba staff are Chinese, so I think it would be easy to provide a zh-CN document. Please consider it.
I have a docker registry: docker.acmcoder.com. There are several public and private images in it.
I want to use dragonfly for image distribution.
After deployment, the public images can be pulled, but the private images cannot.
df-daemon:
df-daemon --registry https://docker.acmcoder.com -port 18001
dockerd
root@iZ2ze7tbgo9lk7jubiy6tcZ:~# cat /etc/docker/daemon.json
{
"registry-mirrors": ["http://localhost:18001"]
}
root@iZ2ze7tbgo9lk7jubiy6tcZ:~# docker pull acmcoder/ubuntu:12days
Error response from daemon: repository acmcoder/ubuntu not found: does not exist or no pull access
root@iZ2ze7tbgo9lk7jubiy6tcZ:~# docker pull docker.acmcoder.com/acmcoder/ubuntu:12days
12days: Pulling from acmcoder/ubuntu
054be6183d06: Pull complete
779578d7ea6e: Pull complete
82315138c8bd: Pull complete
88dc0000f5c4: Pull complete
79f59e52a355: Pull complete
a0b39032ccd2: Pull complete
46c56af51e55: Pull complete
1ee150f4e54b: Pull complete
ead87fbf844e: Pull complete
baa460bf3866: Pull complete
0dd037bb5aaa: Pull complete
Digest: sha256:3b1428d54bc875ff75eb8f0379516cbd04bb5e09b37024db4ac3bff574b88c7d
Status: Downloaded newer image for docker.acmcoder.com/acmcoder/ubuntu:12days
dockerd log:
ERRO[2258] Not continuing with pull after error: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
INFO[2258] Ignoring extra error returned from registry: unauthorized: authentication required
INFO[2258] Translating "denied: requested access to the resource is denied" to "repository acmcoder/ubuntu not found: does not exist or no pull access"
ERRO[2258] Handler for POST /v1.26/images/create returned error: repository acmcoder/ubuntu not found: does not exist or no pull access
After I run docker login localhost:18001, it still doesn't work.
Docker info (CentOS 7.4 x64):
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 45
Server Version: 1.12.6
When I run:
./build/build.sh supernode
I get these errors:
[INFO] --- docker-maven-plugin:1.0.0:build (default-cli) @ supernode ---
[INFO] Using authentication suppliers: [ConfigFileRegistryAuthSupplier]
[INFO] Copying /usr/src/Dragonfly/src/supernode/target/supernode.jar -> /usr/src/Dragonfly/src/supernode/target/docker/supernode.jar
[INFO] Copying /usr/src/Dragonfly/src/supernode/src/main/docker/sources/nginx.conf -> /usr/src/Dragonfly/src/supernode/target/docker/sources/nginx.conf
[INFO] Copying /usr/src/Dragonfly/src/supernode/src/main/docker/sources/start.sh -> /usr/src/Dragonfly/src/supernode/target/docker/sources/start.sh
[INFO] Copying /usr/src/Dragonfly/src/supernode/src/main/docker/Dockerfile -> /usr/src/Dragonfly/src/supernode/target/docker/Dockerfile
[INFO] Building image supernode:0.2.0
Step 1 : FROM busybox:latest as SRC
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.605 s
[INFO] Finished at: 2018-07-02T18:45:53+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:1.0.0:build (default-cli) on project supernode: Exception caught: Error parsing reference: "busybox:latest as SRC" is not a valid repository/tag -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
BUILD(supernode): FAILURE
Does dfget support the Windows platform? Does the client contain any logic that specifically depends on Linux features? Thanks.
CI is very important to ensure a PR satisfies some basic conditions.
Is there a plan to add it?
The image blob path is: /docker/registry/v2/blobs/sha256/5b/5b4eafb626e8c8741318829cd66edceae73cf35ac06ec890fe0c83d2832 — the image download did not go through Dragonfly.
Docker options:
--insecure-registry harbor-noah.test.com
--registry-mirror http://127.0.0.1:65001
Log in to generate the credentials file:
[root@harborC ~]# docker login harbor-noah.test.com
Username: registry
Password:
Login Succeeded
Start the proxy:
[root@harborC df-client]# ./df-daemon -registry http://harbor-noah.test.com
launch df-daemon on port:65001
Try to pull — it fails:
[root@harborC ~]# docker pull archplatform/janus-server-cluster-staging.test.com:1.5.1_71_d5bc523e1f546008cd8075c1f6d93a69e3b7d3b5
Error response from daemon: repository archplatform/janus-server-cluster-staging.vip.vip.com not found: does not exist or no pull access
Pulling directly from Harbor succeeds:
[root@harborC ~]# docker pull harbor-noah.test.com/archplatform/janus-server-cluster-staging.test.com:1.5.1_71_d5bc523e1f546008cd8075c1f6d93a69e3b7d3b5
1.5.1_71_d5bc523e1f546008cd8075c1f6d93a69e3b7d3b5: Pulling from archplatform/janus-server-cluster-staging.test.com
Digest: sha256:2e2384cfacddd6f73595054e7f1ea870a19af47072c7d6d0ce7c3463b12811a9
I read the df-daemon code, and it indeed does not include authentication parameters when requesting the image registry.
containerd 1.0 was released about a week ago, and containerd 1.0 pulls container images by itself, no longer via dockerd. In all dockerd releases so far, containerd 0.2.x is used and image management lives in dockerd, not containerd. But now things are changing.
I think Dragonfly can help docker/pouch implement P2P image distribution. Given this change in which component bears the image-pulling workload, does Dragonfly plan to support image pulling from containerd 1.0?
I suspect some changes may be needed in the agent proxy on each node, but I have not tested yet.
Can you simply make the images available on Docker Hub instead of asking people to build them?
1. How are disk I/O frequency control and disk space checking implemented? I couldn't find corresponding tunable parameters in the client code.
2. Could you briefly explain the difference between global and local rate limiting?
3. What exactly does the whitelist feature implement?
Thanks.
Hello, I am a student. I found the source code of Alibaba's open-source file distribution system Dragonfly on GitHub. After deploying it to control 20 machines at my school and measuring download times, the initial downloads took fairly uniform time. As I understand it, P2P download should let machines share downloaded resources with each other, yet each subsequent machine's download time keeps increasing. How can this be fixed? Many thanks!
Following the containerized server deployment docs, the lines below fail when executed on a VM and need to be handled differently; deployment on a physical machine has no such problem. I verified that on a VM, if files exist under a directory, rm -rf during docker build does indeed fail:
RUN mv $(find /usr/java/ -maxdepth 1 -name "jdk*" -type d) /usr/java/jdk
&& rm -rf $CATALINA_HOME/webapps/*
&& rm -rf /tmp/supernode-sourcecode
To reproduce: create a new directory, put any file in it, and rm -rf cannot delete it on a VM (VMware virtual machine):
System information:
cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
Step 8 : RUN mkdir /usr/local/test-aaaa && touch /usr/local/test-aaaa/a.txt
---> Using cache
---> ce6708e01a5d
Step 9 : RUN rm -rf /usr/local/test-aaaa
---> Running in e8176466c46f
rm: cannot remove '/usr/local/test-aaaa': Directory not empty
The command '/bin/sh -c rm -rf /usr/local/test-aaaa' returned a non-zero code: 1
The new structure of docs is:
```
docs
├── images
├── template
├── en
├── zh
.
.
.
└── other-languages
```
And the images directory is shared by all language versions.
dfget can currently limit the download speed. Is there any way to limit the bandwidth the supernode uses when it downloads from the origin?
[root@localhost system]# docker pull 127.0.0.1:65001/zzy/centos6.5:base
base: Pulling from zzy/centos6.5
d3971421e51c: Pulling fs layer
a3ed95caeb02: Pulling fs layer
83102a73c967: Pulling fs layer
a8b5b2dbc018: Waiting
a6aabf5d6519: Waiting
2471aff69e5c: Waiting
e4372b6e0478: Waiting
572e418c3275: Waiting
6d2cd982fb24: Waiting
403fb5b41e50: Waiting
c5f6731c52c0: Waiting
ba634aff20cd: Waiting
844254443901: Waiting
96dc9a1c052f: Waiting
4a3d7732cfe3: Waiting
2018/03/20 06:06:20 http: proxy error: dfget fail(exit status 1):exit status 1
error pulling image configuration: received unexpected HTTP status: 502 Bad Gateway
[root@localhost system]# 2018/03/20 06:06:20 http: proxy error: dfget fail(exit status 1):exit status 1
2018/03/20 06:06:20 http: proxy error: dfget fail(exit status 1):exit status 1
2018/03/20 06:06:22 http: proxy error: dfget fail(exit status 1):exit status 1
2018/03/20 06:06:24 http: proxy error: dfget fail(exit status 1):exit status 1
Currently only client-side bandwidth limiting is provided, with no limit on the number of connections. Could this lead to an excessively high connection count, or is there a default limit in the code? We mainly care about whether the connection count is controllable.
I could not build the image following the installation docs; some links were broken.
Maybe you could add Docker CI for it, or just build Dragonfly and publish it to Alibaba's registry.
Thanks.
Rewrite dfget in Go, in order to reduce the cost of multi-language development and maintenance and to resolve several problems of the Python version of dfget.
TASKS LIST:
./deamon/: dfdaemon
dfget: version control, tests, CLI, exception handling, register, fetcher, server, rate limiter, net/file manager
Can you please:
a. Provide a link (obvious enough in the README) to a try-out page.
b. Consolidate the documents and provide a single try-out page.
c. Provide end-to-end instructions on how to try it out — "end to end" meaning sending and receiving a sample file.
I hit a very strange case.
I pulled an image through the Dragonfly image proxy and found that one blob shows as downloaded ("send file finished"), but the dfget process is still alive.
root 3900 3654 39 17:49 ? 00:00:16 /home/tops/bin/python2.7 /home/staragent/plugins/dragonfly/dist/dfget -u http://storage.docker.aliyun-inc.com/docker/registry/v2/blobs/sha256/b8/b827031a99cbd0afcd52063c9756d8924f8864bbf6bbc4f1b1a753c0d55e3073/data?Expires=1516874991&OSSAccessKeyId=LTAIfYaNrksx0ktL&Signature=ZkuMgK6%2Bmt7wJcDKYx7ddMlRTpE%3D -o /home/docker/tmp/dragon-fly/85863339/b827031a99cbd0afcd52063c9756d8924f8864bbf6bbc4f1b1a753c0d55e3073 -f Signature&Expires&OSSAccessKeyId -s 95M --totallimit 95M --callsystem ali_scheduling_docker --notbs    <- finished downloading at :54
root 4014 1 0 17:49 ? 00:00:00 /home/tops/bin/python2.7 /home/staragent/plugins/dragonfly/dist/dfget -u http://storage.docker.aliyun-inc.com/docker/registry/v2/blobs/sha256/aa/aa2c4e8c127ad2ca616a949a649b278029b0d3b9ff90b9adc8d440c4fc4b760b/data?Expires=1516874991&OSSAccessKeyId=LTAIfYaNrksx0ktL&Signature=HTyYwE5sDKUmumjKSzemjQEihw4%3D -o /home/docker/tmp/dragon-fly/75435239/aa2c4e8c127ad2ca616a949a649b278029b0d3b9ff90b9adc8d440c4fc4b760b -f Signature&Expires&OSSAccessKeyId -s 95M --totallimit 95M --callsystem ali_scheduling_docker --notbs    <- shown as finished at :49, but the process is still alive
the blob b827031a99cbd0afcd52063c9756d8924f8864bbf6bbc4f1b1a753c0d55e3073, finish at 2018-01-25T17:54:51
time="2018-01-25T17:54:51+08:00" level=info msg="send file finished [/home/docker/tmp/dragon-fly/85863339/b827031a99cbd0afcd52063c9756d8924f8864bbf6bbc4f1b1a753c0d55e3073] [1]"
The blob aa2c4e8c127ad2ca616a949a649b278029b0d3b9ff90b9adc8d440c4fc4b760b shows it finished at 2018-01-25T17:49:52, but I can still see its dfget process even after the other dfget download (blob b827031a99cbd0afcd52063c9756d8924f8864bbf6bbc4f1b1a753c0d55e3073) has finished:
time="2018-01-25T17:49:52+08:00" level=info msg="send file finished [/home/docker/tmp/dragon-fly/75435239/aa2c4e8c127ad2ca616a949a649b278029b0d3b9ff90b9adc8d440c4fc4b760b] [1]"
Hello, I am a graduate student at Huazhong University of Science and Technology. I found the source code of Alibaba's open-source file distribution system Dragonfly on GitHub. The system provides host-level rate limiting, which I am very interested in. While reading the source, some parts were unclear to me: the getter module is the client-side module handling file distribution and sharing, and one of its components contains a RateLimiter class used for rate limiting. What exactly do its key variables token and window represent? I haven't fully figured that out.
Also, what is the logic behind host-level rate limiting? How is it guaranteed that the host's speed does not exceed the configured limit?
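For intuition only — this is not the project's actual implementation, and all names are hypothetical — a limiter of this kind is commonly a token bucket: tokens accumulate at `rate` units per second up to a burst window of `capacity`, and each transfer must withdraw tokens before sending. A minimal sketch:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: `rate` tokens/sec, burst up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)          # refill speed (e.g. bytes per second)
        self.capacity = float(capacity)  # max tokens that accumulate (the burst window)
        self.tokens = float(capacity)    # start full
        self.clock = clock
        self.last = clock()

    def _refill(self):
        # Credit tokens for the time elapsed since the last call, capped at capacity.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, n):
        """Take n tokens if available; return True on success, False to signal 'wait'."""
        self._refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

A sender would call consume(len(chunk)) before writing each chunk and sleep briefly when it returns False; if every transfer on the host draws from one shared bucket, the aggregate speed converges to `rate`, which is the essence of host-level limiting.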
As the title says: I started two dfget processes locally to download the same online file simultaneously, and got the errors below. What could be the cause?
I started the supernode process locally via Docker:
docker run -d -p 8001:8001 -p 8002:8002 f28c73445883
My dragonfly.conf is configured as follows:
[node]
address=127.0.0.1,127.0.0.1
dfclient error log:
[2017-12-15 18:56:47,844] WARNING sign:70462-1513335389.992 lineno:167 : has not available pieceTask,maybe resource lack
[2017-12-15 18:56:47,848] WARNING sign:70462-1513335389.992 lineno:167 : has not available pieceTask,maybe resource lack
[2017-12-15 18:56:47,851] INFO sign:70462-1513335389.992 lineno:110 : pull piece task result:{u'msg': u'piece resource lack', u'code': 602} and sleep 1.432 ...
[2017-12-15 18:56:47,938] ERROR sign:70469-1513335392.125 lineno:332 : piece range:0-5242879 error,realMd5:70461da8b94c6ca5d2fda3260c5a8c3b,expectedMd5:436a148497d25f524d222019b174f54f,dstIp:127.0.0.1,total:162
[2017-12-15 18:56:47,943] ERROR sign:70469-1513335392.125 lineno:332 : piece range:10485760-15728639 error,realMd5:70461da8b94c6ca5d2fda3260c5a8c3b,expectedMd5:7d05eb939cd37dc6f612d09a9b6319cc,dstIp:127.0.0.1,total:162
[2017-12-15 18:56:47,944] ERROR sign:70469-1513335392.125 lineno:332 : piece range:5242880-10485759 error,realMd5:70461da8b94c6ca5d2fda3260c5a8c3b,expectedMd5:d99f987ccca3083eaac61338d800da60,dstIp:127.0.0.1,total:162
[2017-12-15 18:56:47,947] WARNING sign:70469-1513335392.125 lineno:167 : has not available pieceTask,maybe resource lack
[2017-12-15 18:56:47,958] WARNING sign:70469-1513335392.125 lineno:167 : has not available pieceTask,maybe resource lack
[2017-12-15 18:56:47,965] ERROR sign:70469-1513335392.125 lineno:332 : piece range:10485760-15728639 error,realMd5:70461da8b94c6ca5d2fda3260c5a8c3b,expectedMd5:7d05eb939cd37dc6f612d09a9b6319cc,dstIp:127.0.0.1,total:162
[2017-12-15 18:56:47,970] INFO sign:70469-1513335392.125 lineno:110 : pull piece task result:{u'msg': u'piece resource lack', u'code': 602} and sleep 0.665 ...
Could someone help me figure out the cause? Thanks! ^_^
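The realMd5/expectedMd5 lines above reflect per-piece integrity checks: each downloaded byte range is hashed and compared with the checksum registered for that piece. Conceptually (a sketch, not the client's actual code; the function names are hypothetical):

```python
import hashlib

def piece_md5(data, start, end):
    """MD5 of the inclusive byte range [start, end] of `data`, hex-encoded."""
    return hashlib.md5(data[start:end + 1]).hexdigest()

def verify_piece(data, start, end, expected_md5):
    """Return True if the piece's real MD5 matches the expected one."""
    real = piece_md5(data, start, end)
    if real != expected_md5:
        # Corresponds to a `piece range:... error` log line above:
        # the piece is discarded and must be re-fetched.
        return False
    return True
```

That every failing piece in the log reports the same realMd5 for different ranges may suggest the peer served the same wrong content for each range, rather than ordinary corruption — consistent with two concurrent downloads interfering with each other.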
The documentation is very brief; after deployment I have no idea how client nodes interact with the CM.
Testing with dfget feels no different from wget. My understanding is that the CM acts as a proxy for the origin, but I have no idea how to configure this.
Thanks.
It seems the image file is missing.
Step 8/19 : RUN wget http://mirrors.hust.edu.cn/apache/tomcat/tomcat-7/v7.0.84/bin/apache-tomcat-7.0.84.tar.gz -O /tmp/apache-tomcat-7.0.84.tar.gz && cd /usr/local && tar xzf /tmp/apache-tomcat-7.0.84.tar.gz && ln -s /usr/local/apache-tomcat-7.0.84 /usr/local/tomcat && rm /tmp/apache-tomcat-7.0.84.tar.gz
---> Running in c07a53d81f55
--2018-02-22 09:40:12-- http://mirrors.hust.edu.cn/apache/tomcat/tomcat-7/v7.0.84/bin/apache-tomcat-7.0.84.tar.gz
Resolving mirrors.hust.edu.cn (mirrors.hust.edu.cn)... 202.114.18.160
Connecting to mirrors.hust.edu.cn (mirrors.hust.edu.cn)|202.114.18.160|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-02-22 09:40:16 ERROR 404: Not Found.
The command '/bin/sh -c wget http://mirrors.hust.edu.cn/apache/tomcat/tomcat-7/v7.0.84/bin/apache-tomcat-7.0.84.tar.gz -O /tmp/apache-tomcat-7.0.84.tar.gz && cd /usr/local && tar xzf /tmp/apache-tomcat-7.0.84.tar.gz && ln -s /usr/local/apache-tomcat-7.0.84 /usr/local/tomcat && rm /tmp/apache-tomcat-7.0.84.tar.gz' returned a non-zero code: 8
dfclient.log
2019-01-23 18:49:53.725 INFO sign:14434-1548240593.723 : get cmd params:["./bin/darwin_amd64/dfget" "-u" "http://127.0.0.1:8001/a.test" "-o" "/tmp/a.test" "--console"]
2019-01-23 18:49:53.726 INFO sign:14434-1548240593.723 : get init config:{"url":"http://127.0.0.1:8001/a.test","output":"/tmp/a.test","pattern":"p2p","node":["127.0.0.1"],"console":true,"clientQueueSize":6,"startTime":"2019-01-23T18:49:53.722690597+08:00","sign":"14434-1548240593.723","user":"zj","workHome":"/Users/zj/.small-dragonfly","configFile":["/etc/dragonfly.yaml","/etc/dragonfly.conf"]}
--2019-01-23 18:49:53-- http://127.0.0.1:8001/a.test
dfget version:0.3.0
workspace:/Users/zj/.small-dragonfly sign:14434-1548240593.723
2019-01-23 18:49:53.726 INFO sign:14434-1548240593.723 : target file path:/tmp/a.test
2019-01-23 18:49:53.727 INFO sign:14434-1548240593.723 : runtimeVariable: {"MetaPath":"/Users/zj/.small-dragonfly/meta/host.meta","SystemDataDir":"/Users/zj/.small-dragonfly/data","DataDir":"/Users/zj/.small-dragonfly/data","RealTarget":"/tmp/a.test","TargetDir":"/tmp","TempTarget":"/tmp/dfget-14434-1548240593.723.tmp-108158694","Cid":"127.0.0.1-14434-1548240593.723","TaskURL":"http://127.0.0.1:8001/a.test","TaskFileName":"a.test-14434-1548240593.723","LocalIP":"127.0.0.1","PeerPort":0,"FileLength":-1,"DataExpireTime":180000000000,"ServerAliveTime":300000000000}
2019-01-23 18:49:53.728 INFO sign:14434-1548240593.723 : local http result: err:dial tcp4 127.0.0.1:0: connect: can't assign requested address, port:0 path:/check/
2019-01-23 18:49:53.857 INFO sign:14434-1548240593.723 : local http result:a.test-14434-1548240593.723 err:<nil>, port:25801 path:/check/
2019-01-23 18:49:53.857 INFO sign:14434-1548240593.723 : use peer server on port:25801
2019-01-23 18:49:53.857 INFO sign:14434-1548240593.723 : do register to one of [127.0.0.1 127.0.0.1]
2019-01-23 18:49:53.873 INFO sign:14434-1548240593.723 : do register to 127.0.0.1, res:{"code":200,"data":{"taskId":"5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00","fileLength":0,"pieceSize":4194304}} error:<nil>
2019-01-23 18:49:53.873 INFO sign:14434-1548240593.723 : do register result:{"code":200,"data":{"taskId":"5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00","fileLength":0,"pieceSize":4194304}} and cost:0.016s
client:127.0.0.1 connected to node:127.0.0.1
start download by dragonfly
2019-01-23 18:49:53.874 INFO sign:14434-1548240593.723 : P2P download:{"taskID":"5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00","superNode":"127.0.0.1","dstCid":"","range":"","result":502,"status":700,"pieceSize":0,"pieceNum":0}
2019-01-23 18:49:53.879 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:RUNNING,cdn sucCount:0"} and sleep 1.838s
2019-01-23 18:49:55.731 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 1.981s
2019-01-23 18:49:57.724 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 1.067s
2019-01-23 18:49:58.809 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 1.651s
2019-01-23 18:50:00.474 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 1.930s
2019-01-23 18:50:02.419 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 1.309s
2019-01-23 18:50:03.746 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 0.615s
2019-01-23 18:50:04.372 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 0.845s
2019-01-23 18:50:05.229 INFO sign:14434-1548240593.723 : Pull piece task result:{"code":602,"msg":"client sucCount:0,cdn status:SUCCESS,cdn sucCount:0"} and sleep 1.612s
supernode log:
2019-01-23 10:49:53.872 INFO 11 --- [http-nio-8080-exec-10] c.d.d.s.repository.TaskRepository : get file length:0 from http client about taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00
2019-01-23 10:49:53.872 INFO 11 --- [http-nio-8080-exec-10] c.d.d.s.service.impl.CdnManagerImpl : do trigger cdn start for taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00,httpLen:0
2019-01-23 10:49:53.873 INFO 11 --- [Thread-13] c.d.d.s.service.impl.CdnManagerImpl : do trigger cdn success for taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00
2019-01-23 10:49:53.878 INFO 11 --- [pool-1-thread-8] c.d.d.supernode.service.cdn.Downloader : taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00 fileUrl:http://127.0.0.1:8001/a.test on downloader
2019-01-23 10:49:53.937 INFO 11 --- [Thread-14] c.d.d.s.service.impl.CdnReporterImpl : taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00 fileLength:0 status:SUCCESS from:local
2019-01-23 10:49:53.938 INFO 11 --- [Thread-14] c.d.d.supernode.service.cdn.SuperWriter : taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00 readCost:0,totalCost:60,fileLength:0,realMd5:d41d8cd98f00b204e9800998ecf8427e
2019-01-23 10:49:53.938 ERROR 11 --- [Thread-14] c.d.d.s.s.impl.FileMetaDataServiceImpl : pieceMd5s is empty for taskId:5b1c283943dc1c8da0865907038b0401ce7f5f50b4a7bb2b8f8b7bcf7dda5d00
Following the instructions for installing the server as a Docker container, I got the following error:
[INFO] Building image supernode:0.2.0
Jun 22, 2018 4:05:45 PM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.io.IOException) caught when processing request to {}->unix://localhost:80: Permission denied
Jun 22, 2018 4:05:45 PM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->unix://localhost:80
Jun 22, 2018 4:05:45 PM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.io.IOException) caught when processing request to {}->unix://localhost:80: Permission denied
Jun 22, 2018 4:05:45 PM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->unix://localhost:80
Jun 22, 2018 4:05:45 PM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.io.IOException) caught when processing request to {}->unix://localhost:80: Permission denied
Jun 22, 2018 4:05:45 PM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->unix://localhost:80
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.749s
[INFO] Finished at: Fri Jun 22 16:05:45 PDT 2018
[INFO] Final Memory: 48M/869M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:1.0.0:build (default-cli) on project supernode: Exception caught: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: Permission denied -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
BUILD(supernode): FAILURE
Ubuntu 16.04
mkdir client
tar xzvf df-client.linux-amd64.tar.gz -C client
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Also, building from source fails:
make
rm -rf temp
mkdir temp
cp -r /root/Dragonfly/src/getter/* ./temp
export GOPATH=/root/Dragonfly/build/client;cd /root/Dragonfly/build/client/src/github.com/alibaba/Dragonfly/src/daemon/src/df-daemon;go build
handler/transport.go:41: unknown http.Transport field 'DialContext' in struct literal
handler/transport.go:42: unknown http.Transport field 'MaxIdleConns' in struct literal
handler/transport.go:43: unknown http.Transport field 'IdleConnTimeout' in struct literal
Makefile:11: recipe for target 'build' failed
make: *** [build] Error 2
Suppose my Docker image consists of 10 layers and is 6 GB in total. Under the supernode's repo path I only see 7 layers, totaling less than 6 GB. Is the supernode failing to download and save some of the image layers?
Could the roadmap be converted to the GFM Task Lists format?
Writing down the problems I ran into, from a user's perspective.