google-deepmind / gqn-datasets
Datasets used to train Generative Query Networks (GQNs) in the ‘Neural Scene Representation and Rendering’ paper.
License: Apache License 2.0
Hi,
I have searched for quite a long time now, and I'm looking for a fast and efficient way of reading your dataset without TensorFlow. I could use a minimum of TensorFlow code, but from what I've seen we are forced to run the DataReader.read method inside a TensorFlow session.
I've looked into solutions like https://github.com/pgmmpk/tfrecord, but the files are handled differently there and the data is decoded incorrectly.
Do you have recommendations on how to use the dataset without or with minimal tensorflow code?
Thanks in advance.
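For what it's worth, the TFRecord container itself is simple enough to parse without TensorFlow: each record is an 8-byte little-endian length, a 4-byte masked CRC of that length, the record bytes, then a 4-byte masked CRC of the data. A minimal sketch (it skips CRC verification, and you would still need the protobuf Example definition, or a library such as the `tfrecord` package on PyPI, to decode each record it yields):

```python
import struct

def iter_tfrecord(path):
    # TFRecord framing: uint64 length (little-endian), 4-byte length CRC,
    # `length` bytes of record data, then a 4-byte data CRC.
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file
            length, = struct.unpack("<Q", header)
            f.read(4)              # skip masked CRC of the length
            yield f.read(length)   # raw serialized Example proto
            f.read(4)              # skip masked CRC of the data
```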
I can't find any reference to pytflib on Google or PyPI. Is that an internal thing? I found a commit that replaces sonnet.python.ops.nest with pytflib.nest. What's the equivalent of pytflib.nest.map_structure in Sonnet?
Thanks
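Not an answer about pytflib itself, but for reference: in recent TensorFlow the same utility lives at tf.nest.map_structure, and its semantics are just "apply a function to every leaf of a nested structure". A minimal pure-Python sketch covering the common containers (dicts, lists, tuples; the real implementation also handles namedtuples and checks structure compatibility):

```python
def map_structure(fn, structure):
    # Recursively apply fn to every leaf of nested dicts/lists/tuples.
    if isinstance(structure, dict):
        return {k: map_structure(fn, v) for k, v in structure.items()}
    if isinstance(structure, (list, tuple)):
        return type(structure)(map_structure(fn, v) for v in structure)
    return fn(structure)  # leaf value
```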
Are the GQN datasets free to download?
And do I need to apply for a new Google Cloud bucket?
Hello, it has been a while, so I would like to ask: is there any plan to release the public code as well as the trained model?
Hey guys,
I converted the entire dataset to NumPy and I was wondering if you'd like to integrate that into the official bucket. My free $300 will be used up at some point :D The data can be found here. Let me know what you think.
Jens
Hi,
I was wondering how to download the GQN training and test data from https://console.cloud.google.com/storage/browser/gqn-dataset/
There is no obvious download link for the directories. The tfrecord files can be downloaded individually, but this is clearly infeasible for 10k+ files. Is there an easier way to do this without having to scrape the URLs?
Thanks,
Oliver
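One workaround: if the Google Cloud SDK is installed, gsutil can copy a whole directory in one command (e.g. gsutil -m cp -R gs://gqn-dataset/shepard_metzler_5_parts .). Without the SDK, the per-file HTTPS URLs can be generated rather than scraped, since the files follow a fixed numbering scheme; a sketch, assuming the bucket is publicly readable at storage.googleapis.com and the 001-of-N zero-padded numbering seen in the bucket:

```python
def record_urls(dataset, split, num_files):
    # Public GCS objects are reachable at storage.googleapis.com/<bucket>/<path>.
    # Records are numbered 001-of-N .. N-of-N, zero-padded to the width of N.
    base = "https://storage.googleapis.com/gqn-dataset"
    width = len(str(num_files))
    return [
        f"{base}/{dataset}/{split}/{i:0{width}d}-of-{num_files}.tfrecord"
        for i in range(1, num_files + 1)
    ]
```

Each URL can then be fetched with any HTTP client or passed in bulk to a downloader.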
Hello guys, I am working with the GQN dataset and I would like to know the camera intrinsic parameters for the rooms_ring_camera dataset.
The paper says: "Images are rendered using MuJoCo’s default OpenGL renderer", so I would guess the camera parameters could be shared for future research.
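The intrinsics are not published here, but if the renderer's field of view were known, a standard pinhole intrinsic matrix could be recovered from it. A generic sketch; the fov_y_deg value below is an assumption you would need to confirm against the actual MuJoCo camera configuration:

```python
import math

def pinhole_intrinsics(width, height, fov_y_deg):
    # Generic pinhole model: focal length from a vertical field of view,
    # principal point at the image centre. fov_y_deg is hypothetical here;
    # the GQN renderer's true FOV is not stated in this repo.
    f = (height / 2.0) / math.tan(math.radians(fov_y_deg) / 2.0)
    cx, cy = width / 2.0, height / 2.0
    return [[f, 0.0, cx],
            [0.0, f, cy],
            [0.0, 0.0, 1.0]]
```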
Dear Fabio Viola,
I checked the gqn dataset from https://github.com/deepmind/gqn-datasets. Shepard_metzler_7_parts contains 900 tfrecords for training. Each tfrecord has 20 scenes. So there are only 18000 scenes. For Mazes, there are 1080 records with 100 scenes for each record, which means 108000 in total.
So I think this link contains just a part of the whole dataset, right? If so, could you send a link to the full dataset? Many thanks in advance!
Best,
Bing
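The scene counts above are just the per-record counts multiplied out; spelled out, using only the numbers quoted in the message:

```python
# Scene counts implied by the figures above: tfrecords x scenes per record.
datasets = {
    "shepard_metzler_7_parts": (900, 20),   # 900 tfrecords, 20 scenes each
    "mazes": (1080, 100),                   # 1080 tfrecords, 100 scenes each
}
totals = {name: files * scenes for name, (files, scenes) in datasets.items()}
```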
Hello fabioviola,
I was wondering how to download the GQN training and test data from the link
https://console.cloud.google.com/storage/browser/gqn-dataset/.
Even though you have given some suggestions, I am still stuck (Windows platform).
Is there another way to download this dataset?
Thank you.
Mingjia
Hi, I had two questions pertaining to the dataset:
The paper mentions 'top-down views' of the maze configurations. Are these views also included in the maze dataset, and if so, at which file indices would I be able to find them?
I am trying to normalize the datasets and finding their means and variances is taking about a full day per dataset. If the authors already have this information, would it be possible to know the mean and standard deviation of each training set, for each RGB channel?
Thanks for your help!
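On the second question: the per-channel statistics can be accumulated in a single streaming pass, so the full dataset never has to sit in memory and each file is only read once. A sketch, assuming the images arrive as uint8 arrays with a trailing RGB channel dimension:

```python
import numpy as np

def channel_stats(batches):
    # One pass over the data: accumulate per-channel sums and sums of
    # squares, then recover mean and std. `batches` yields uint8 arrays
    # of shape [..., H, W, 3]; pixels are rescaled to [0, 1].
    n = 0
    s = np.zeros(3)
    sq = np.zeros(3)
    for batch in batches:
        x = batch.reshape(-1, 3).astype(np.float64) / 255.0
        n += x.shape[0]
        s += x.sum(axis=0)
        sq += (x ** 2).sum(axis=0)
    mean = s / n
    std = np.sqrt(sq / n - mean ** 2)
    return mean, std
```

The sum-of-squares formula can lose precision for very large n; Welford's online algorithm is the numerically safer variant if that becomes an issue.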
Hi,
In the DataReader instantiation of the README.md code snippet, the attribute should be dataset='jaco' and not version='jaco'.
Hi,
I think root_path should be a string in the README.md code snippet, as stated in the DataReader documentation, and not an Ellipsis.
Hi,
I am a little curious about the resolution of the rendered images.
Have you tried higher resolutions? Or is training at high resolution slower, which is why 64x64 is used in the paper?
Have you considered releasing the environments you used?
It seems the _get_dataset_files method generates filenames from 0 to num_files - 1, e.g.
root/shepard_metzler_5_parts/train/000-of-900.tfrecord
to
root/shepard_metzler_5_parts/train/899-of-900.tfrecord
However, the files on Google Cloud are numbered from 1 to num_files:
root/shepard_metzler_5_parts/train/001-of-900.tfrecord
to
root/shepard_metzler_5_parts/train/900-of-900.tfrecord
This causes training to crash when the program tries to access the missing 0th file.
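A sketch of the off-by-one, assuming the three-digit zero padding shown above; the fix is only shifting the range by one so the generated names match what is actually in the bucket:

```python
num_files = 900

# Buggy: range(num_files) yields 0..899 -> 000-of-900 .. 899-of-900,
# and the bucket has no 000 file.
buggy = [f"{i:03d}-of-{num_files}.tfrecord" for i in range(num_files)]

# Fixed: 1..num_files -> 001-of-900 .. 900-of-900, matching the bucket.
fixed = [f"{i:03d}-of-{num_files}.tfrecord" for i in range(1, num_files + 1)]
```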