bigscity-libcity's People

Contributors

2448845600, a-l-r1, apolsus, aptx1231, aressfull, asabaka, creddittale, cwt-2021, excelsior399, hczs, huangjiawei128, jarvisorange, kazeya27, luiluizi, nadiaaaaachen, nickhan-cs, potassiumwings, qwtdgh, songhuahu-umd, wenmellors, xbr-1111, zhouconch

bigscity-libcity's Issues

PEMSD3 data loading problem

2022-07-18 05:29:39,733 - INFO - Loaded file PEMSD3.geo, num_nodes=358
2022-07-18 05:29:39,735 - INFO - set_weight_link_or_dist: link
2022-07-18 05:29:39,735 - INFO - init_weight_inf_or_zero: zero
2022-07-18 05:29:39,738 - INFO - Loaded file PEMSD3.rel, shape=(358, 358)
2022-07-18 05:29:39,738 - INFO - Loading file PEMSD3.dyna
TypeError: only size-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "run_model.py", line 36, in
run_model(task=args.task, model_name=args.model, dataset_name=args.dataset,
File "/root/Bigscity-LibCity/libcity/pipeline/pipeline.py", line 48, in run_model
train_data, valid_data, test_data = dataset.get_data()
File "/root/Bigscity-LibCity/libcity/data/dataset/traffic_state_datatset.py", line 932, in get_data
x_train, y_train, x_val, y_val, x_test, y_test = self._generate_train_val_test()
File "/root/Bigscity-LibCity/libcity/data/dataset/traffic_state_datatset.py", line 851, in _generate_train_val_test
x, y = self._generate_data()
File "/root/Bigscity-LibCity/libcity/data/dataset/traffic_state_datatset.py", line 777, in _generate_data
df = self._load_dyna(filename) # (len_time, ..., feature_dim)
File "/root/Bigscity-LibCity/libcity/data/dataset/traffic_state_point_dataset.py", line 39, in _load_dyna
return super()._load_dyna_3d(filename)
File "/root/Bigscity-LibCity/libcity/data/dataset/traffic_state_datatset.py", line 269, in _load_dyna_3d
data = np.array(data, dtype=np.float) # (len(self.geo_ids), len_time, feature_dim)
ValueError: setting an array element with a sequence.

The above error appears when this dataset is selected.
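
For reference, a minimal standalone sketch (not LibCity code) of the kind of check that localizes this class of error: np.array(..., dtype=float) raises exactly these exceptions when the nested list built from the .dyna file is ragged, i.e. when the per-node time series have different lengths; a common cause is missing or duplicated timestamps for some entities. The variable names below are illustrative.

    import numpy as np

    # Illustrative ragged input: node 0 has 2 time steps, node 1 has only 1.
    data = [[[1.0, 2.0], [3.0, 4.0]],
            [[5.0, 6.0]]]

    lengths = {len(series) for series in data}
    if len(lengths) != 1:
        print("Ragged .dyna content: per-node series lengths differ:", sorted(lengths))
    else:
        arr = np.array(data, dtype=float)  # note: np.float is deprecated in recent NumPy
        print(arr.shape)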

JSON file parsing problem

After syncing to the latest version, running the program, e.g. python run_model.py --task traffic_state_pred --model GRU --dataset METR_LA, reports a JSON parsing error (shown below). I am not sure what went wrong; earlier versions did not have this problem.
Traceback (most recent call last):
File "run_model.py", line 49, in
run_model(task=args.task, model_name=args.model, dataset_name=args.dataset,
File "/root/Bigscity-LibCity/libcity/pipeline/pipeline.py", line 30, in run_model
config = ConfigParser(task, model_name, dataset_name,
File "/root/Bigscity-LibCity/libcity/config/config_parser.py", line 25, in init
self._load_default_config()
File "/root/Bigscity-LibCity/libcity/config/config_parser.py", line 69, in _load_default_config
task_config = json.load(f)
File "/root/miniconda/lib/python3.8/json/init.py", line 293, in load
return loads(fp.read(),
File "/root/miniconda/lib/python3.8/json/init.py", line 357, in loads
return _default_decoder.decode(s)
File "/root/miniconda/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/root/miniconda/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 408 column 9 (char 15204)
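
The decoder already reports the failing position (line 408, column 9). A small standalone sketch, assuming the file being loaded is ./libcity/config/task_config.json (the file read by _load_default_config), to print the lines around that position; an "Expecting ',' delimiter" error at a specific line usually points to a missing comma or an unclosed brace introduced by a local edit to that JSON file.

    import json

    path = "./libcity/config/task_config.json"  # assumed; adjust to the file named in the traceback
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)
        print("JSON parses cleanly")
    except json.JSONDecodeError as e:
        with open(path, encoding="utf-8") as f:
            lines = f.readlines()
        for i in range(max(0, e.lineno - 3), min(len(lines), e.lineno + 2)):
            marker = ">>" if i + 1 == e.lineno else "  "
            print(marker, i + 1, lines[i].rstrip())
        print("JSONDecodeError:", e.msg, "at line", e.lineno, "column", e.colno)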

Logic error with lati/longi in strnn_encoder

The keys of geo_coord are the original location IDs, and [loc_lati, loc_longi] are the corresponding latitude and longitude.

            loc_id, loc_longi, loc_lati = int(tokens[0]), eval(tokens[2]), eval(tokens[3])
            self.geo_coord[loc_id] = [loc_lati, loc_longi]

In strnn_encoder.py, the original code is:

87            lati = self.geo_coord[self.location2id[current_points[-1]]][0]
89            longi = self.geo_coord[self.location2id[current_points[-1]]][1]

self.location2id maps the original location IDs to re-indexed location IDs, so there is a problem here.

I think it should be changed to:

87            lati = self.geo_coord[current_points[-1]][0]
89            longi = self.geo_coord[current_points[-1]][1]
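
A toy illustration of the reported problem, under the assumptions stated above (geo_coord is keyed by the original location IDs, location2id maps original IDs to re-indexed IDs, and current_points holds original IDs, which is what the proposed fix implies); all values and names here are hypothetical, not LibCity code:

    # geo_coord: original location id -> [lati, longi]
    geo_coord = {1001: [35.68, 139.77], 1002: [35.66, 139.70]}
    # location2id: original location id -> re-indexed id
    location2id = {1001: 0, 1002: 1}

    current_points = [1002]  # assumed to contain original location ids

    # Original lines 87/89: geo_coord is indexed with the re-indexed id,
    # which either raises KeyError or silently returns the wrong coordinates.
    # lati = geo_coord[location2id[current_points[-1]]][0]

    # Proposed fix: index geo_coord directly with the original id.
    lati = geo_coord[current_points[-1]][0]
    longi = geo_coord[current_points[-1]][1]
    print(lati, longi)  # 35.66 139.7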

Problems with the AGCRN model on the PEMSD4 and PEMSD8 datasets

Hello, when I run the AGCRN model on the PEMSD4 and PEMSD8 datasets, there are problems with the results: besides the MAPE values themselves being abnormal, they show no regularity, and the other metrics also differ considerably from those in the original paper.
(screenshot of the results attached)
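
For what it's worth, one common reason for abnormal (often infinite) MAPE on the PEMS datasets is zero readings in the ground truth, which is also why the masked variants exist; the unmasked MAPE is likewise inf in the MTGNN results reported further down. A minimal illustration (not LibCity's evaluator code):

    import numpy as np

    y_true = np.array([0.0, 50.0, 60.0])   # zero sensor readings are common in PEMS data
    y_pred = np.array([5.0, 55.0, 54.0])

    # Plain MAPE divides by zero and blows up.
    with np.errstate(divide="ignore"):
        plain_mape = np.mean(np.abs((y_true - y_pred) / y_true))

    # Masked MAPE drops positions where the ground truth is zero.
    mask = y_true != 0
    masked_mape = np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask]))

    print(plain_mape)   # inf
    print(masked_mape)  # 0.1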

GEML training problem

Hello, I trained the GEML model with the preprocessed data you provide, "NYCTAXI202004-202006_OD", and then made predictions with the trained model. The current problems are: 1. The trained model's predictions are close to 0; loading the npz result from the cache and taking np.max of the predicted values gives a result close to 0. 2. During training, the log shows that the loss barely drops at all.
The same phenomenon also appears when I move to my own data.

TAXIBJ dataset

It seems that some files are missing when I run the STDN model with the TAXIBJ dataset:
FileNotFoundError: [Errno 2] No such file or directory: './raw_data/TAXIBJ/TAXIBJ2013.gridod'

Logging error occurred in traj_loc_pred task

Hi, thanks for the awesome work.
A logging error happened during the evaluation process. The default task in https://github.com/LibCity/Bigscity-LibCity/blob/master/run_model.py works well. However, an error occurred when I switched the task to traj_loc_pred with the foursquare_nyc dataset.

Code to Reproduce

    # code changes of main function in run_model.py
    parser.add_argument('--task', type=str, default='traj_loc_pred', help='the name of task')
    parser.add_argument('--model', type=str, default='DeepMove', help='the name of model')
    parser.add_argument('--dataset', type=str, default='foursquare_nyc', help='the name of dataset')

Actual result

--- Logging error ---

Traceback (most recent call last):
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 1025, in emit
    msg = self.format(record)
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 869, in format
    return fmt.format(record)
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 608, in format
    record.message = record.getMessage()
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 369, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "xx/Bigscity-LibCity-master/run_model.py", line 43, in <module>
    train=args.train, other_args=other_args)
  File "xx\Bigscity-LibCity-master\libcity\pipeline\pipeline.py", line 63, in run_model
    executor.evaluate(test_data)
  File "xx\Bigscity-LibCity-master\libcity\executor\traj_loc_pred_executor.py", line 108, in evaluate
    self.evaluator.save_result(self.evaluate_res_dir)
  File "xx\Bigscity-LibCity-master\libcity\evaluator\traj_loc_pred_evaluator.py", line 96, in save_result
    self._logger.info('evaluate result is ', json.dumps(self.result, indent=1))
Message: 'evaluate result is '
Arguments: ('{\n "Recall@1": 0.16751398997842068\n}',)
--- Logging error ---
Traceback (most recent call last):
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 1025, in emit
    msg = self.format(record)
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 869, in format
    return fmt.format(record)
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 608, in format
    record.message = record.getMessage()
  File "D:\Software\Anaconda3\envs\LibCitybased\lib\logging\__init__.py", line 369, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "xx/Bigscity-LibCity-master/run_model.py", line 43, in <module>
    train=args.train, other_args=other_args)
  File "xx\Bigscity-LibCity-master\libcity\pipeline\pipeline.py", line 63, in run_model
    executor.evaluate(test_data)
  File "xx\Bigscity-LibCity-master\libcity\executor\traj_loc_pred_executor.py", line 108, in evaluate
    self.evaluator.save_result(self.evaluate_res_dir)
  File "xx\Bigscity-LibCity-master\libcity\evaluator\traj_loc_pred_evaluator.py", line 96, in save_result
    self._logger.info('evaluate result is ', json.dumps(self.result, indent=1))
Message: 'evaluate result is '
Arguments: ('{\n "Recall@1": 0.16751398997842068\n}',)

Process finished with exit code 0
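
The root cause is visible in the call stack: traj_loc_pred_evaluator.py line 96 passes the JSON string as an extra positional argument to a log message that contains no % placeholder. A self-contained sketch of the failure and the likely fix (using either lazy %-formatting or plain concatenation):

    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("traj_loc_pred_evaluator")
    result = {"Recall@1": 0.16751398997842068}

    # Original call: no %s in the message, so msg % args raises
    # "TypeError: not all arguments converted during string formatting".
    # logger.info('evaluate result is ', json.dumps(result, indent=1))

    # Fix 1: lazy %-formatting (the argument is only formatted if the record is emitted).
    logger.info('evaluate result is %s', json.dumps(result, indent=1))

    # Fix 2: build the message yourself.
    logger.info('evaluate result is ' + json.dumps(result, indent=1))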

Issue When Running run_model.py (Key Error)

I have already installed the necessary requirements and am now trying to run run_model.py (with the map matching task, the STMatching model, and the T_DRIVE_SMALL dataset; the rest remains default), but I am getting a KeyError. How can I resolve this error? Thank you so much.

gcalix@tiger:~/Bigscity-LibCity$ python run_model.py --task map_matching --model STMatching --dataset T_DRIVE_SMALL
2022-07-28 20:24:15,996 - INFO - Log directory: ./libcity/log
2022-07-28 20:24:15,996 - INFO - Begin pipeline, task=map_matching, model_name=STMatching, dataset_name=T_DRIVE_SMALL, exp_id=87895
2022-07-28 20:24:15,996 - INFO - {'task': 'map_matching', 'model': 'STMatching', 'dataset': 'T_DRIVE_SMALL', 'saved_model': True, 'train': False, 'seed': 0, 'dataset_class': 'MapMatchingDataset', 'executor': 'MapMatchingExecutor', 'evaluator': 'MapMatchingEvaluator', 'k': 5, 'r': 200, 'mu': 0, 'sigma': 20, 'window_size': 40, 'delta_time': True, 'metrics': ['RMF', 'AN', 'AL'], 'save_modes': ['csv'], 'geo': {'including_types': ['Polygon'], 'Polygon': {'row_id': 'num', 'column_id': 'num'}}, 'grid': {'including_types': ['state'], 'state': {'row_id': 32, 'column_id': 32, 'inflow': 'num', 'outflow': 'num'}}, 'data_col': ['inflow', 'outflow'], 'data_files': ['T_DRIVE_SMALL'], 'geo_file': 'T_DRIVE_SMALL', 'output_dim': 2, 'init_weight_inf_or_zero': 'inf', 'set_weight_link_or_dist': 'dist', 'calculate_weight_adj': False, 'weight_adj_epsilon': 0.1, 'time_intervals': 3600, 'device': device(type='cuda', index=0), 'exp_id': 87895}
Traceback (most recent call last):
  File "run_model.py", line 38, in <module>
    train=args.train, other_args=other_args)
  File "/home/gcalix/Bigscity-LibCity/libcity/pipeline/pipeline.py", line 46, in run_model
    dataset = get_dataset(config)
  File "/home/gcalix/Bigscity-LibCity/libcity/data/utils.py", line 22, in get_dataset
    config['dataset_class'])(config)
  File "/home/gcalix/Bigscity-LibCity/libcity/data/dataset/map_matching_dataset.py", line 48, in __init__
    self.with_rd_speed = ('speed' in config['rel']['geo'].keys())
  File "/home/gcalix/Bigscity-LibCity/libcity/config/config_parser.py", line 141, in __getitem__
    raise KeyError('{} is not in the config'.format(key))
KeyError: 'rel is not in the config'

About torch_scatter

When installing your requirements.txt I could not install torch_scatter, so I downloaded the library's whl file from its website, but errors still occurred at runtime.
(screenshot attached: QQ截图20210928155016)

Grid-OD data problem

The code handles the gridod data as follows:
(screenshot of the code attached)
Code location: _load_grid_od_6d and _load_grid_od_4d in libcity/data/dataset/traffic_state_grid_od_dataset.py

The traversal order there is origin row, origin column, destination row, destination column, and the entire len_time-long slice is assigned directly. But the current gridod data format is as follows:
(screenshot of the data attached)
With the current code the data is read in incorrectly. Since the data-reading code is the same for all datasets (rows before columns, origin before destination), the dataset needs to be fixed. @l782993610

conflict issues

Nice work!!!

I tried to run quick start according to the readme, but found the following error:

TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
 
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

This seems to conflict with the package ‘ray’, here is my solution:

pip uninstall protobuf
pip install protobuf==3.20.1

In addition, do you have a plan to support single-machine multi-GPU and multi-machine multi-GPU training?

Problems encountered when running models on the PeMSD4 dataset

1. When running the STMGAT model, it reports: "dgl._ffi.base.DGLError: There are 0-in-degree nodes in the graph, output for those nodes will be invalid. This is harmful for some applications, causing silent performance regression. Adding self-loop on the input graph by calling g = dgl.add_self_loop(g) will resolve the issue. Setting allow_zero_in_degree to be True when constructing this module will suppress the check and let the code run."
2. When running the HGCN model, it reports: "IndexError: index 8 is out of bounds for dimension 0 with size 7"
3. When running the ATDM model, it reports: "RuntimeError: shape '[64, 3, 12, 307]' is invalid for input of size 235776"
4. When running the DKFN model, it reports: "ValueError: Input contains NaN, infinity or a value too large for dtype('float32')."
I hope you can help.

CSTN dataset inconsistency

I found that the CSTN data you provide on Baidu Netdisk lags behind the data generated by the scripts in Bigscity-LibCity-Datasets; the two are inconsistent, and the data produced by the newer scripts is clearly the more accurate one. But I also found some other problems:
(screenshot attached)
It looks like row_id and col_id are swapped between how they are defined and how they are stored, but I cannot tell which one should come first. I hope you can help.

Data reading problem

Shortcomings of the current data reading that could be improved (a consistency-check sketch follows this list):

  • The order of geo_id in the .geo file and entity_id in the .dyna file must be kept consistent
  • For grids, geo_id must be encoded by traversing rows first and then columns
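
A small standalone consistency check for the first point, assuming the usual atomic-file layout where the .geo file has a geo_id column and the .dyna file has an entity_id column; the dataset name and paths here are illustrative:

    import pandas as pd

    dataset = "METR_LA"  # illustrative
    geo = pd.read_csv(f"raw_data/{dataset}/{dataset}.geo")
    dyna = pd.read_csv(f"raw_data/{dataset}/{dataset}.dyna")

    geo_ids = geo["geo_id"].tolist()
    # entity_id order as it first appears in the .dyna file
    dyna_ids = list(dict.fromkeys(dyna["entity_id"]))

    if geo_ids != dyna_ids:
        print("Mismatch: entity_id order in .dyna differs from geo_id order in .geo")
    else:
        print("geo/dyna id order is consistent")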

GPU question

Is it necessary to install the GPU version of PyTorch in order to use the library? Thanks.

ETA dataset error

The datasets Chengdu_Taxi_Sample1 and Beijing_Taxi_Sample downloaded from Baidu Netdisk cannot be used: the former is missing its geo file, and the latter fails with "Error tokenizing data. C error: EOF inside string starting at row 92677".

DeepMove correction suggestions

  • DeepMove should be moved to the trajectory next-location task folder
  • DeepMove does not inherit from AbstractModel

Problem encountered when running the PEMSD3 dataset

When the dataset is PEMSD3 and the model is ASTGCN, MSTGCN, ASTGCNCommon, or MSTGCNCommon, the following error is reported: "scipy.sparse.linalg.eigen.arpack.arpack.ArpackNoConvergence: ARPACK error -1: No convergence (3581 iterations, 0/1 eigenvectors converged) [ARPACK error -14: SNAUPD did not find any eigenvalues to sufficient accuracy.]"
How can this be fixed?

Questions about the NYCTAXI_OD dataset

In the Baidu Netdisk copy of the NYCTAXI202004-202006 dataset, the file NYCTAXI202004-202006.od only contains the text "This file is too big, please connect the author." Where can this file be downloaded?
I hope you can help.

MPR

There are two questions:
1. When building the transition-probability matrix that takes each point as the destination, the Markov chain V = D + Q·D + Q²·D + ··· + Q^(t−1)·D is needed. Since the matrices involved are sparse, each multiplication already has roughly quadratic complexity, and the power typically has to go up to sys.MAXSIZE, so it does not seem feasible to run. I am not sure how to handle this (see the sketch after this list).
2. After clustering, many trajectories contain repeated points. For example, if a trajectory was 1 2 3 4 and points 1 and 3 are merged into one cluster, the trajectory becomes 1 2 1 4; the first two steps are effectively an invalid trajectory, which introduces a large error when measuring accuracy on the test set. But deduplicating would destroy the authenticity of the trajectory data.
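
On the first question: the sum V = D + Q·D + Q²·D + ··· + Q^(t−1)·D can be accumulated term by term (equivalently, via the recurrence V_k = D + Q·V_{k−1}), so no explicit matrix power is ever formed and each step costs only one sparse product. A minimal sketch with scipy.sparse, using small random matrices as stand-ins for the real Q and D:

    import scipy.sparse as sp

    n, t = 500, 20                      # toy sizes; the real Q and D come from the data
    Q = sp.random(n, n, density=0.01, format="csr", random_state=0)
    D = sp.random(n, n, density=0.01, format="csr", random_state=1)

    # V = D + Q·D + Q²·D + ... + Q^(t-1)·D, accumulated without explicit powers.
    V = D.copy()
    term = D
    for _ in range(t - 1):
        term = Q @ term                 # one sparse matrix product per step
        V = V + term

    print(V.shape, V.nnz)

If the spectral radius of Q is below 1, the loop can also be stopped early once the norm of the newest term falls below a tolerance, instead of running it toward an enormous t.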

CARA's data_feature['poi_profile'] is missing

CARA.py reads out 'poi_profile':

258        self.poi_profile = data_feature['poi_profile']

However, cara_encoder.py does not provide a 'poi_profile' key. In cara_encoder.py:

126        self.data_feature = {
127           'loc_size': self.loc_id + 1,
128            'tim_size': self.tim_max + 2,
129            'uid_size': self.uid,
130           'loc_pad': loc_pad,
131            'tim_pad': tim_pad,
132            'id2locid': self.id2locid
133        }

copy.deepcopy()

    def collator(indices):
        batch = Batch(feature_name, pad_item, pad_max_len)
        for item in indices:
            batch.append(copy.deepcopy(item))
        batch.padding()
        return batch
Why is deepcopy added here? It does not seem like append() modifies the item itself. Could this affect speed?

GEML: data format error

Hello, I ran the GEML model with the preprocessed data you provide, "NYCTAXI202004-202006_OD", and got an error.
The command line was: python run_model.py --task traffic_state_pred --model GEML --dataset NYCTAXI202004-202006_OD.
The first error was: No such file or directory: './raw_data/NYCTAXI202004-202006_OD/NYCTAXI202004-202006_OD.rel'.
I then renamed the file NYCTAXI202004-202006.rel to NYCTAXI202004-202006_OD.rel and ran again.
At "- INFO - Loading file NYCTAXI202004-202006.od" another error occurred: KeyError: "['inflow', 'outflow'] not in index"
Which dataset should this model be run with? Is this a problem with the dataset or with the code?

Problem with the automatic hyperparameter tuning tool

python hyper_tune.py --task traffic_state_pred --model GRU --dataset METR_LA --space_file sample_space_file
After running this, it reports:

AttributeError: module 'idna' has no attribute 'IDNAError'

(raylet) ModuleNotFoundError: No module named 'aiohttp.signals'

Traceback (most recent call last):
  File "hyper_tune.py", line 65, in <module>
    other_args=other_args)
  File "/home/jy/Bigscity-LibCity/libcity/pipeline/pipeline.py", line 196, in hyper_parameter
    local_dir='./libcity/cache/hyper_tune', num_samples=num_samples)
  File "/home/jy/.local/lib/python3.7/site-packages/ray/tune/tune.py", line 364, in run
    callbacks, sync_config, metric=metric, loggers=loggers)
  File "/home/jy/.local/lib/python3.7/site-packages/ray/tune/utils/callback.py", line 120, in create_default_callbacks
    _sync_to_driver = detect_sync_to_driver(sync_config.sync_to_driver)
  File "/home/jy/.local/lib/python3.7/site-packages/ray/tune/syncer.py", line 461, in detect_sync_to_driver
    from ray.tune.integration.docker import DockerSyncer
  File "/home/jy/.local/lib/python3.7/site-packages/ray/tune/integration/docker.py", line 5, in <module>
    from ray.autoscaler.sdk import rsync, configure_logging
  File "/home/jy/.local/lib/python3.7/site-packages/ray/autoscaler/sdk.py", line 9, in <module>
    from ray.autoscaler._private import commands
  File "/home/jy/.local/lib/python3.7/site-packages/ray/autoscaler/_private/commands.py", line 28, in <module>
    from ray.autoscaler._private.util import validate_config, hash_runtime_conf, \
  File "/home/jy/.local/lib/python3.7/site-packages/ray/autoscaler/_private/util.py", line 6, in <module>
    import jsonschema
  File "/home/jy/anaconda3/envs/Pytorch171/lib/python3.7/site-packages/jsonschema/__init__.py", line 14, in <module>
    from jsonschema._format import (
  File "/home/jy/anaconda3/envs/Pytorch171/lib/python3.7/site-packages/jsonschema/_format.py", line 240, in <module>
    @_checks_drafts(draft7="idn-hostname", raises=idna.IDNAError)
AttributeError: module 'idna' has no attribute 'IDNAError'
(raylet) Traceback (most recent call last):
(raylet)   File "/home/jy/.local/lib/python3.7/site-packages/ray/new_dashboard/agent.py", line 22, in <module>
(raylet)     import ray.new_dashboard.utils as dashboard_utils
(raylet)   File "/home/jy/.local/lib/python3.7/site-packages/ray/new_dashboard/utils.py", line 20, in <module>
(raylet)     import aiohttp.signals
(raylet) ModuleNotFoundError: No module named 'aiohttp.signals'


How do I change max_epoch?

In task_config.json I confirmed that the executor for the model I want to run is TrafficStateExecutor, so I changed "max_epoch" to 1000 in TrafficStateExecutor.json, but during the experiment max_epoch was still 100. How should I change the parameters so that max_epoch actually changes?
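
A possible workaround, sketched under two assumptions: that run_model accepts an other_args dict of extra config entries (its signature in the tracebacks above does take other_args), and that entries passed this way take precedence over the executor's default config; the import path and the precedence rule are assumptions, not verified behaviour.

    from libcity.pipeline import run_model  # import path assumed

    # Hypothetical override: pass max_epoch through other_args instead of
    # editing TrafficStateExecutor.json.
    run_model(task='traffic_state_pred', model_name='GRU', dataset_name='METR_LA',
              other_args={'max_epoch': 1000})

It may also be worth checking whether the model's own config file sets max_epoch, since (again as an assumption about precedence) that value could be what keeps it at 100.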

The CARA model has no corresponding evaluator configuration

For the CARA model in the trajectory next-location task, task_config.json specifies "evaluator": "CARALocPredEvaluator",
but config/evaluator does not contain a CARALocPredEvaluator.json file.
CARALocPredEvaluator.json probably needs to be added.

MTGNN only captures the first two steps and then fails

Thanks for building this excellent library!

I read the ranking of the models here: https://libcity.ai/#/ranking/METR-LA, and tried to reproduce the results using the METR-LA data.

I used the default settings (input_window 12 and output_window 12) but found that the metrics of MTGNN are strange:

MAE MAPE MSE RMSE masked_MAE masked_MAPE masked_MSE masked_RMSE R2 EVAR
9.465062141 inf 481.758728 21.94900322 2.327759743 0.05680662 17.26930046 4.15563488 0.076125567 0.191385031
9.732895851 inf 484.2471008 22.00561523 2.659701586 0.068691827 26.14606094 5.113321781 0.071359592 0.189329565
16.70842361 inf 533.5368042 23.09841537 11.39525414 0.301383018 191.5453339 13.83999062 -0.023237876 0
16.71042061 inf 533.5725098 23.09918785 11.3973341 0.301409751 191.5722046 13.8409605 -0.023193555 -3.58E-07
16.70780754 inf 533.5404053 23.09849358 11.39412403 0.301374763 191.5185547 13.83902264 -0.023288411 -2.07E-05
16.70010757 inf 533.4511108 23.09656143 11.38522053 0.301252007 191.4121399 13.83517742 -0.023419132 -1.19E-07
16.70577621 inf 533.5248413 23.09815788 11.39150429 0.301328242 191.4844055 13.83778954 -0.02328399 -2.38E-07
16.70551872 inf 533.5244751 23.09814835 11.3910265 0.301333904 191.47229 13.8373518 -0.023330131 -1.19E-07
16.70627594 inf 533.5570679 23.09885406 11.39166451 0.301335871 191.4946899 13.83816051 -0.02331653 1.19E-07
16.70834541 inf 533.6000366 23.09978485 11.39379406 0.301371098 191.5291443 13.83940601 -0.023293527 0
16.70842552 inf 533.6105347 23.10001183 11.39365578 0.301361412 191.5263062 13.83930302 -0.023285843 -1.19E-07
16.70877266 inf 533.6110229 23.10002327 11.39382648 0.301376134 191.5121155 13.83879089 -0.023291072 0

You can see that MTGNN only learns the first two steps well; after that, the performance remains almost unchanged.

Gensim

What kind of model is Gensim? I can only find its executor.

task: traj_loc_pred; model: STRNN; dataset:foursquare_tky. can't run

I have been reading your code for a long time; thank you for your great work! But I think it has some bugs.
When I use STRNN to test trajectory location prediction with the foursquare_tky dataset, I get this error:

File "/workdir/Bigscity-LibTraffic-master/libtraffic/model/trajectory_loc_prediction/STRNN.py", line 21, in init
self.poi_profile = data_feature['poi_profile']
KeyError: 'poi_profile'

I also think the input configuration of STRNN has some problems. You used:
"dataset_class": "TrajectoryDataset",
"executor": "TrajLocPredExecutor",
"evaluator": "TrajLocPredEvaluator",
"traj_encoder": "StandardTrajectoryEncoder"
I'm afraid they don't match, because I see that a StrnnEncoder exists, but I still cannot find data_feature['poi_profile'].

Problems found on first use

Thanks for open-sourcing such a good framework. Here are the problems I found when first getting started:

  1. The quick start is not detailed. Following the quick start tutorial, python run_model.py cannot be run because no arguments are given, and you do not explain which datasets need to be downloaded, which leaves newcomers lost; it is not very beginner friendly. When I then ran python run_model.py --task traffic_state_pred --model DCRNN --dataset METR_LA, I ran into many package problems: even after installing the packages in requirements.txt, quite a few packages were still missing. Running again still failed: the code used import torch.tensor as tensor, and the error said tensor could not be found; changing it to from torch import tensor fixed it.

  2. One problem is still unsolved: torch-sparse will not install. I suspect it is a dependency mismatch between the libraries; the versions have to correspond!
    The solution suggested online is:
    pip install torch==1.2.0
    pip install torch_geometric==1.4.1
    pip install torch_sparse==0.4.4
    pip install torch_scatter==1.4.0
    pip install torch_cluster==1.4.5
    But your torch version is much higher, and I am afraid reinstalling torch will cause other version-compatibility problems.
    I do not know how to solve this.
    In short, your framework is very comprehensive. I came across it this morning and was amazed: such a huge amount of work on code, data, documentation, the website and more, all technically demanding, and you pulled it off; my admiration is hard to put into words. But the user experience is poor, and I hope it improves!
    Finally, my respects to you.

Automatic hyperparameter tuning fails: it complains the password is wrong, but Redis is configured with no password. Where is the program's password set?

Traceback (most recent call last):
File "E:\conda\envs\pytorch\lib\site-packages\ray_private\services.py", line 655, in wait_for_redis_to_start
redis_client.client_list()
File "E:\conda\envs\pytorch\lib\site-packages\redis\client.py", line 1194, in client_list
return self.execute_command('CLIENT LIST')
File "E:\conda\envs\pytorch\lib\site-packages\redis\client.py", line 898, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "E:\conda\envs\pytorch\lib\site-packages\redis\connection.py", line 1192, in get_connection
connection.connect()
File "E:\conda\envs\pytorch\lib\site-packages\redis\connection.py", line 567, in connect
self.on_connect()
File "E:\conda\envs\pytorch\lib\site-packages\redis\connection.py", line 643, in on_connect
auth_response = self.read_response()
File "E:\conda\envs\pytorch\lib\site-packages\redis\connection.py", line 739, in read_response
response = self._parser.read_response()
File "E:\conda\envs\pytorch\lib\site-packages\redis\connection.py", line 484, in read_response
raise response
redis.exceptions.AuthenticationError: Client sent AUTH, but no password is set

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "hyper_tune.py", line 52, in
other_args=other_args)
File "H:\Bigscity-LibCity\libcity\pipeline\pipeline.py", line 211, in hyper_parameter
local_dir='./libcity/cache/hyper_tune', num_samples=num_samples)
File "E:\conda\envs\pytorch\lib\site-packages\ray\tune\tune.py", line 298, in run
_ray_auto_init()
File "E:\conda\envs\pytorch\lib\site-packages\ray\tune\tune.py", line 681, in _ray_auto_init
ray.init()
File "E:\conda\envs\pytorch\lib\site-packages\ray_private\client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "E:\conda\envs\pytorch\lib\site-packages\ray\worker.py", line 896, in init
ray_params=ray_params)
File "E:\conda\envs\pytorch\lib\site-packages\ray\node.py", line 248, in init
self.start_head_processes()
File "E:\conda\envs\pytorch\lib\site-packages\ray\node.py", line 894, in start_head_processes
self.start_redis()
File "E:\conda\envs\pytorch\lib\site-packages\ray\node.py", line 714, in start_redis
port_denylist=self._ray_params.reserved_ports)
File "E:\conda\envs\pytorch\lib\site-packages\ray_private\services.py", line 881, in start_redis
port_denylist=port_denylist)
File "E:\conda\envs\pytorch\lib\site-packages\ray_private\services.py", line 1029, in _start_redis_instance
wait_for_redis_to_start("127.0.0.1", port, password=password)
File "E:\conda\envs\pytorch\lib\site-packages\ray_private\services.py", line 666, in wait_for_redis_to_start
redis_ip_address, redis_port)) from authEx
RuntimeError: Unable to connect to Redis at 127.0.0.1:6379.

Asking for help with on-demand service prediction

Thanks for your great job!

I am interested in the CCRNN model for the on-demand service prediction task, but I don't know which dataset can be used with this model. Could you please tell me?
By the way, a list showing which datasets are suitable for each task would be very helpful.

Thanks again and look forward to your reply~

Visualization

When grid data is used with a general (non-grid) model, the output is not saved in grid form.
