
mil's People

Contributors

baek-jinoo, matwilso, tianheyu927, townie


mil's Issues

I can't find the gym env "ReacherMILTest-v1"

I have installed mujoco, mujoco_py, and gym successfully, but when I run the reach test script, the gym env "ReacherMILTest-v1" cannot be found.

It seems to be a custom-built environment, so can anyone tell me how to use this gym env?
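For reference, ReacherMILTest-v1 is not part of the stock gym release; it seems to be registered in the custom gym fork referenced elsewhere in this tracker (gym-mil). A quick way to check which gym install you are importing and whether the id is registered (a rough sketch, assuming that fork follows the usual gym registration API):

import gym

print(gym.__file__)  # should point at the gym-mil checkout, not the pip-installed gym

try:
    env = gym.make('ReacherMILTest-v1')  # only resolvable if the fork registered it
    print('found env:', env.spec.id)
except gym.error.Error as e:
    print('env not registered in this gym install:', e)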

ValueError: Tensor conversion requested dtype string for Tensor with dtype float32

Hello, I ran your code and got the following error:

Traceback (most recent call last):
  File "main.py", line 310, in <module>
    main()
  File "main.py", line 268, in main
    train_image_tensors = data_generator.make_batch_tensor(network_config, restore_iter=FLAGS.restore_iter)
  File "/media/niudong/资源/Projects/mil/data_generator.py", line 216, in make_batch_tensor
    filename_queue = tf.train.string_input_producer(tf.convert_to_tensor(all_filenames), shuffle=False)
  File "/home/niudong/workon_home/milpy2/local/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 245, in string_input_producer
    string_tensor = ops.convert_to_tensor(string_tensor, dtype=dtypes.string)
  File "/home/niudong/workon_home/milpy2/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1048, in convert_to_tensor
    as_ref=False)
  File "/home/niudong/workon_home/milpy2/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1144, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/niudong/workon_home/milpy2/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 981, in _TensorTensorConversionFunction
    (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("Const:0", shape=(0,), dtype=float32)'

I guess it might be caused by the TensorFlow version; I am using TF 1.11.
Hoping someone can give me some clues!
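For what it's worth, the shape=(0,), dtype=float32 in the message suggests all_filenames is empty by the time it reaches tf.convert_to_tensor, so TensorFlow falls back to float32 for the empty list. A small guard along these lines (a sketch with a hypothetical path, not the repo's code) makes the failure obvious and pins the dtype:

import tensorflow as tf  # TF 1.x APIs

# In data_generator.py this list is built by scanning the demo directories;
# the shape=(0,) float32 constant in the error implies it came out empty.
all_filenames = ['data/sim_push/object_0/cond0.samp0.gif']  # hypothetical path

assert len(all_filenames) > 0, 'no demo image files found -- check the data path'
filename_queue = tf.train.string_input_producer(
    tf.convert_to_tensor(all_filenames, dtype=tf.string), shuffle=False)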

Thanks
Best Wishes

Issues with data_generator.py

I keep getting KeyError: 0 at these lines:
self.state_idx = range(demos[0]['demoX'].shape[-1])
self._dU = demos[0]['demoU'].shape[-1]

and
states = np.vstack(demos[i]['demoX'] for i in self.train_idx) # hardcoded here to solve the memory issue
ValueError: need at least one array to concatenate
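Both errors look consistent with the demos dict coming back empty (for example, when no .pkl files are found under the demo path): demos[0] then raises KeyError: 0, and np.vstack over zero arrays raises the concatenate error. A standalone sanity check along these lines (the directory name is hypothetical; point it at your data) should confirm or rule that out:

import glob
import pickle

import numpy as np

demo_dir = 'data/sim_push'  # hypothetical; use whatever your demo path is set to
demo_files = sorted(glob.glob(demo_dir + '/*.pkl'))

demos = {}
for i, path in enumerate(demo_files):
    with open(path, 'rb') as f:
        demos[i] = pickle.load(f)

# An empty dict reproduces both symptoms: demos[0] -> KeyError: 0, and
# np.vstack over zero arrays -> "need at least one array to concatenate".
assert len(demos) > 0, 'no demo pickles found under %s' % demo_dir

state_dim = demos[0]['demoX'].shape[-1]
action_dim = demos[0]['demoU'].shape[-1]
# np.vstack is happier with a list than with a bare generator expression.
states = np.vstack([demos[i]['demoX'] for i in sorted(demos)])
print(state_dim, action_dim, states.shape)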

Thanks

About Temporal Convolution in mil.py

Hi! I noticed that, according to the figure in One-Shot Imitation from Observing Humans, the input to the temporal convolution should be the concatenation of the convolutional-layer features and the output of the fully connected layers. But in mil.py the temporal convolution only takes fc_output as its input.
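To make the question concrete, here is a toy sketch of the concatenated input the figure seems to describe (TF 1.x ops; the shapes and filter sizes are invented, not taken from the repo):

import tensorflow as tf  # TF 1.x APIs

T = 100                            # time steps in a demo
conv_features = tf.zeros([T, 32])  # e.g. spatial-softmax features per step
fc_output = tf.zeros([T, 7])       # output of the fully connected layers

# Concatenate along the feature axis, then run a 1-D convolution over time
# (tf.layers.conv1d expects [batch, time, channels]).
temporal_in = tf.expand_dims(tf.concat([conv_features, fc_output], axis=1), axis=0)
temporal_out = tf.layers.conv1d(temporal_in, filters=32, kernel_size=10, padding='same')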

what does "learn_final_eept" mean?

('learn_final_eept', False, 'learn an auxiliary loss for predicting final end-effector pose')

I have some doubts about the flag "learn_final_eept". What does it mean? I ask because in all of the demo scripts under "scripts", "learn_final_eept" is set to False.
Thank you.
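For context, an auxiliary loss of this kind is usually just a second prediction head trained alongside the main imitation loss. A rough sketch of the general pattern the flag description suggests (toy tensors and an arbitrary weight, not the repo's actual implementation):

import tensorflow as tf  # TF 1.x APIs

features = tf.zeros([8, 64])      # hypothetical policy features for a batch
final_eept_gt = tf.zeros([8, 3])  # ground-truth final end-effector position
main_loss = tf.constant(0.0)      # stands in for the imitation loss

# Auxiliary head: predict the final end-effector pose and add an extra L2
# term, scaled by a coefficient, to the main loss.
final_eept_pred = tf.layers.dense(features, 3)
aux_loss = tf.reduce_mean(tf.square(final_eept_pred - final_eept_gt))
total_loss = main_loss + 0.1 * aux_loss  # 0.1 is an arbitrary example weight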

Problem running run_sim_push_video_only.sh

The scripts run_sim_push.sh and run_sim_vision_reach.sh ran successfully (since the data/sim_vision_reach_noisy directory referenced in the script does not exist, I changed it to data/sim_vision_reach). However, run_sim_push_video_only.sh does not produce the model with actions (judging from the saved model's folder name, which contains no_action.zero_state.):
Saving model to: /tmp/data/sim_push.xavier_init.4_conv.4_strides.16_filters.3_fc.200_dim.bt_dim_20.mbs_3.ubs_1.numstep_1.updatelr_0.01.clip_10.conv_bt.fp.no_action.zero_state.two_heads.1d_conv_act_3_32_10x1_filters/model_nnn
./scripts/run_sim_push_video_only.sh: line 8: --temporal_num_layers=3: command not found
The previously suggested solution of using TF version 1.4 did not solve the problem.

Noisy demonstrations

Hello,
I'm interested in DAML and want to try the noisy demonstrations for the domain-shift setting. Where can I find the noisy demonstrations?
Thanks.

Something wrong with the code of MIL

Hello,
I trained with your code following run_sim_push_video_only.sh, but the test results differ from run to run even when I load the same trained model with the same testing data.
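One thing that might be worth ruling out is unseeded randomness at test time (Python, NumPy, TensorFlow, and the gym environments themselves). A minimal seeding sketch, assuming TF 1.x and the old gym seeding API:

import random

import numpy as np
import tensorflow as tf  # TF 1.x APIs

SEED = 0
random.seed(SEED)
np.random.seed(SEED)      # any NumPy-based sampling of demos/conditions
tf.set_random_seed(SEED)  # graph-level seed; set before building the network

# If the test harness creates gym environments, seed those as well, e.g.
# env.seed(SEED), so resets and object placements repeat across runs.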

low success rate

I suspect there is something wrong with the code.
I ran your code according to your instructions and tested with test_sim_vision_reach.sh, but the final success rate is only around 10%.

One-shot imitation learning

Hi!

This is more of a question than an issue.

I read the one-shot imitation learning paper that you used as a reference,

but I can't find the code related to it in this repo.

Does this model also use an attention mechanism like that paper does?

mysterious tf.transpose in data_generator.py

Is there any deeper meaning behind the comment "transpose to mujoco setting for images"? (What is the "mujoco setting"?)

I think image = tf.transpose(image, perm=[0, 3, 1, 2]) would be more natural than the current line:

image = tf.transpose(image, perm=[0, 3, 2, 1]) # transpose to mujoco setting for images
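For reference, here is what the two permutations do to an NHWC batch, in plain NumPy (same axis semantics as tf.transpose):

import numpy as np

images = np.zeros((5, 100, 125, 3))  # (batch, height, width, channels)

print(np.transpose(images, (0, 3, 1, 2)).shape)  # (5, 3, 100, 125) -> NCHW
print(np.transpose(images, (0, 3, 2, 1)).shape)  # (5, 3, 125, 100) -> width/height swapped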

Thanks.

Please tell me mujoco version.

I'm very interested in MIL and I just ran the test_sim_push (video only) code, but the performance was not good: with --no_action=True, --no_state=True, and --zero_state, the success rate so far was 46%.
In the meta-training phase, the post-update loss decreased quickly and converged to around 29-30.

Please tell me which mujoco and mujoco-py versions you used, and, if possible, any conceivable causes for this result.

Thank you.

Missing xmls in the data

Hi,

It seems this problem has already been marked as closed, but the textures folder is still missing the xmls. I unzipped the dataset manually just to check whether the xmls are present in the textures folder, but I only see pngs.

Is the zipped dataset missing the xmls, or am I doing something wrong?

Thanks,
Nikesh

AttributeError: 'ReacherMILEnv' object has no attribute 'viewer'

Traceback (most recent call last):
  File "/home/user/gym-mil/gym/core.py", line 203, in __del__
    self.close()
  File "/home/user/gym-mil/gym/core.py", line 164, in close
    self.render(close=True)
  File "/home/user/gym-mil/gym/core.py", line 150, in render
    return self._render(mode=mode, close=close)
  File "/home/user/gym-mil/gym/envs/mujoco/mujoco_env.py", line 104, in _render
    if self.viewer is not None:
AttributeError: 'ReacherMILEnv' object has no attribute 'viewer'

What causes this error?
Hoping for some tips, thanks.
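For what it's worth, this AttributeError is usually a secondary symptom: the env's __init__ raised before self.viewer was ever assigned, and gym's __del__ -> close() -> render(close=True) chain then trips over the missing attribute, hiding the original error further up the log. A tiny stand-in class (hypothetical names, not the repo's code) shows the pattern and the usual getattr guard:

class EnvStub(object):
    """Stand-in for ReacherMILEnv, only to show the failure mode."""
    def __init__(self):
        # Something fails (missing mujoco key, missing xml, ...) before the
        # line that would normally assign self.viewer.
        raise RuntimeError('init failed before self.viewer was assigned')

    def __del__(self):
        # Guarding with getattr avoids the confusing secondary AttributeError.
        if getattr(self, 'viewer', None) is not None:
            self.viewer = None

try:
    EnvStub()
except RuntimeError as e:
    print('original error worth debugging:', e)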

Dataset problem

Hi Tianhe,
There are some problems with the dataset from http://rail.eecs.berkeley.edu/datasets/mil_data.zip.
I used the following code to check the dataset in ./data/sim_push:

import glob
import pickle
file_dir = './data/sim_push'
file_list = glob.glob(file_dir + "/*.pkl")
bad_file = []
for i in range(len(file_list)):
    try:
        with open(file_list[i], 'rb') as f:
            data = pickle.load(f)
    except:
        bad_file.append(file_list[i])
print("open(file_list): bad_file: ", len(bad_file))
print(bad_file)

There are 78 files that cannot be loaded with pickle.
If I change open(file_list[i], 'rb') to open(file_list[i]), then 753 files cannot be loaded (i.e., none of the files load normally).

I used both Python 2.7.6 and Python 3.6.3 to load the pickle files and got the same problem.
For Python 2.7.6, print(pickle.format_version) gives 2.0.
For Python 3.6.3, print(pickle.format_version) gives 4.0.

I guess the problem may be related to the pickle version or protocol. Could you tell me which Python/pickle version you used when you dumped the pickle files?
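One more thing that may be worth trying: if the pickles were written under Python 2, Python 3's pickle needs an explicit encoding for the embedded byte strings and NumPy arrays. Whether that is the cause here is only a guess, but it is cheap to check (the file name below is hypothetical):

import pickle

path = 'data/sim_push/demos_0.pkl'  # hypothetical file name; use one of the dataset's .pkl files
with open(path, 'rb') as f:
    # Python 3 only: 'latin1' round-trips Python 2 str / NumPy payloads.
    data = pickle.load(f, encoding='latin1')
print(type(data))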

run_sim_push_video_only.sh: line 8: --temporal_num_layers=3: command not found

Hi, I get a long list of errors when I try to launch the scripts. Below is the error output from running run_sim_push_video_only.sh. Is it a TensorFlow version problem? I tried the solution from https://github.com/tensorflow/models/issues/3705#issuecomment-375563179, but it didn't help. Could you please look into what the problem and solution could be?
Number of demos: 9228
TIMER:data_generator.py:extract_supervised_data: Normalizing states (Elapsed: 0.966589s)
TIMER:data_generator.py:generate_batches: Generating batches for each iteration (Elapsed: 661.176920s)
WARNING:tensorflow:From ~/daml/mil/data_generator.py:194: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From ~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/training/input.py:276: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From ~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/training/input.py:188: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
WARNING:tensorflow:From ~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/training/input.py:197: QueueRunner.init (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From ~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/training/input.py:197: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
Generating image processing ops
WARNING:tensorflow:From ~/daml/mil/data_generator.py:196: WholeFileReader.init (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.map(tf.read_file).
Batching images
WARNING:tensorflow:From ~/daml/mil/data_generator.py:225: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or padded_batch(...) if dynamic_pad=True).
Generating image processing ops
Batching images
TIMER:mil.py:init_network: building TF network (Elapsed: 9.237436s)
Traceback (most recent call last):
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 2673, in gather
    return params.sparse_read(indices, name=name)
AttributeError: 'Tensor' object has no attribute 'sparse_read'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 229, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 451, in make_tensor_proto
    _GetDenseDimensions(values)))
ValueError: Argument must be a dense tensor: range(0, 20) - got shape [20], but wanted [].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 524, in _apply_op_helper
    values, as_ref=input_arg.is_ref).dtype.name
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 229, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 451, in make_tensor_proto
    _GetDenseDimensions(values)))
ValueError: Argument must be a dense tensor: range(0, 20) - got shape [20], but wanted [].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 314, in <module>
    main()
  File "main.py", line 280, in main
    model.init_network(graph, input_tensors=train_input_tensors, restore_iter=FLAGS.restore_iter)
  File "~/daml/mil/mil.py", line 36, in init_network
    result = self.construct_model(input_tensors=input_tensors, prefix=prefix, dim_input=self._dO, dim_output=self._dU, network_config=self.network_params)
  File "~/daml/mil/mil.py", line 630, in construct_model
    unused = batch_metalearn((inputa[0], inputb[0], actiona[0], actionb[0]))
  File "~/daml/mil/mil.py", line 516, in batch_metalearn
    local_outputa, final_eept_preda = self.forward((inputa), state_inputa, weights, network_config=network_config)
  File "~/daml/mil/mil.py", line 252, in forward
    context = tf.transpose(tf.gather(tf.transpose(tf.zeros_like(flatten_image)), range(FLAGS.bt_dim)))
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 2675, in gather
    return gen_array_ops.gather_v2(params, indices, axis, name=name)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3332, in gather_v2
    "GatherV2", params=params, indices=indices, axis=axis, name=name)
  File "~/daml/mil/.env/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 528, in _apply_op_helper
    (input_name, err))
ValueError: Tried to convert 'indices' to a tensor and failed. Error: Argument must be a dense tensor: range(0, 20) - got shape [20], but wanted [].
./scripts/run_sim_push_video_only.sh: line 8: --temporal_num_layers=3: command not found
(.env) user@daml:~/daml/mil$
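For anyone hitting the same ValueError: the traceback bottoms out at mil.py line 252, where range(FLAGS.bt_dim) is passed directly to tf.gather; under Python 3 (this log shows python3.5) range is a lazy object rather than a list, and this TensorFlow version refuses to convert it, as the "Argument must be a dense tensor: range(0, 20)" message shows. A minimal sketch of the local workaround (materializing the indices with list(); the shapes are hypothetical, and this is not the maintainers' fix):

import tensorflow as tf  # TF 1.x APIs

bt_dim = 20                          # stands in for FLAGS.bt_dim
flatten_image = tf.zeros([4, 4096])  # hypothetical shape, only for illustration

# Failing form from the traceback (Python 3 range is not a list):
#   tf.gather(tf.transpose(tf.zeros_like(flatten_image)), range(bt_dim))
context = tf.transpose(
    tf.gather(tf.transpose(tf.zeros_like(flatten_image)), list(range(bt_dim))))
print(context.shape)  # (4, 20)

The trailing "--temporal_num_layers=3: command not found" looks like a separate shell-script problem (typically a broken line continuation), not a Python error.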

I suspect there is something wrong with the code.

I found there is a serious problem with the code: the environments of the "reach" test demos are exactly the same as the gym environments, which means the gym environments know the results (states and actions) in advance. In other words, the results may come directly from the "reach" test demos.
