This repository has gone stale as I unfortunately do not have the time to maintain it anymore. If you would like to continue the development of it as a collaborator send me an email at [email protected].
Collection of PyTorch implementations of Generative Adversarial Network varieties presented in research papers. Model architectures will not always mirror the ones proposed in the papers; I have chosen to focus on covering the core ideas rather than getting every layer configuration right. Contributions and suggestions of GANs to implement are very welcome.
See also: Keras-GAN
- Installation
- Implementations
- Auxiliary Classifier GAN
- Adversarial Autoencoder
- BEGAN
- BicycleGAN
- Boundary-Seeking GAN
- Cluster GAN
- Conditional GAN
- Context-Conditional GAN
- Context Encoder
- Coupled GAN
- CycleGAN
- Deep Convolutional GAN
- DiscoGAN
- DRAGAN
- DualGAN
- Energy-Based GAN
- Enhanced Super-Resolution GAN
- GAN
- InfoGAN
- Least Squares GAN
- MUNIT
- Pix2Pix
- PixelDA
- Relativistic GAN
- Semi-Supervised GAN
- Softmax GAN
- StarGAN
- Super-Resolution GAN
- UNIT
- Wasserstein GAN
- Wasserstein GAN GP
- Wasserstein GAN DIV
$ git clone https://github.com/eriklindernoren/PyTorch-GAN
$ cd PyTorch-GAN/
$ sudo pip3 install -r requirements.txt
Auxiliary Classifier Generative Adversarial Network
Augustus Odena, Christopher Olah, Jonathon Shlens
Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
$ cd implementations/acgan/
$ python3 acgan.py
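
Below is a minimal, hedged sketch of how the AC-GAN objective combines an adversarial real/fake term with an auxiliary class-prediction term. The function and variable names are illustrative, not the exact ones used in acgan.py, and the discriminator is assumed to output a real/fake probability plus class logits.

```python
# Hypothetical AC-GAN discriminator loss (illustrative names only).
import torch
import torch.nn as nn

adversarial_loss = nn.BCELoss()          # real/fake head (expects probabilities)
auxiliary_loss = nn.CrossEntropyLoss()   # class-label head (expects logits)

def discriminator_loss(validity_real, validity_fake,
                       class_logits_real, class_logits_fake,
                       labels, gen_labels):
    valid = torch.ones_like(validity_real)   # targets for real samples
    fake = torch.zeros_like(validity_fake)   # targets for generated samples
    d_real = (adversarial_loss(validity_real, valid)
              + auxiliary_loss(class_logits_real, labels)) / 2
    d_fake = (adversarial_loss(validity_fake, fake)
              + auxiliary_loss(class_logits_fake, gen_labels)) / 2
    return (d_real + d_fake) / 2
```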
Adversarial Autoencoder
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial autoencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization. We performed experiments on MNIST, Street View House Numbers and Toronto Face datasets and show that adversarial autoencoders achieve competitive results in generative modeling and semi-supervised classification tasks.
$ cd implementations/aae/
$ python3 aae.py
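
A rough sketch of the AAE regularization phase described above: the encoder's codes play the role of fake samples for a discriminator that operates in latent space, while samples from the prior act as real ones. Names are illustrative and the latent discriminator is assumed to output probabilities; this is not a copy of aae.py.

```python
# Hedged sketch of the adversarial regularization of the latent code.
import torch
import torch.nn as nn

bce = nn.BCELoss()

def regularization_losses(encoder, latent_discriminator, real_imgs, latent_dim):
    z_fake = encoder(real_imgs)                          # aggregated posterior samples
    z_real = torch.randn(real_imgs.size(0), latent_dim)  # samples from the prior
    valid = torch.ones(real_imgs.size(0), 1)
    fake = torch.zeros(real_imgs.size(0), 1)
    # Discriminator: tell prior samples apart from encoded ones.
    d_loss = (bce(latent_discriminator(z_real), valid)
              + bce(latent_discriminator(z_fake.detach()), fake)) / 2
    # Encoder ("generator"): make encoded codes look like prior samples.
    g_loss = bce(latent_discriminator(z_fake), valid)
    return d_loss, g_loss
```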
BEGAN: Boundary Equilibrium Generative Adversarial Networks
David Berthelot, Thomas Schumm, Luke Metz
We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.
$ cd implementations/began/
$ python3 began.py
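
The sketch below illustrates the equilibrium bookkeeping the abstract refers to: the auto-encoding discriminator's reconstruction error acts as an energy, and the coefficient k is adjusted each step by proportional control. Names and the hyperparameters gamma and lambda_k are illustrative defaults, not necessarily those in began.py.

```python
# Hedged sketch of BEGAN's equilibrium term and convergence measure.
import torch

def began_step(d_recon_real, real_imgs, d_recon_fake, fake_imgs,
               k, gamma=0.75, lambda_k=0.001):
    loss_real = torch.mean(torch.abs(d_recon_real - real_imgs))
    loss_fake = torch.mean(torch.abs(d_recon_fake - fake_imgs))
    d_loss = loss_real - k * loss_fake
    # Proportional control toward the equilibrium E[L(fake)] = gamma * E[L(real)].
    k = min(max(k + lambda_k * (gamma * loss_real.item() - loss_fake.item()), 0.0), 1.0)
    # Global convergence measure from the paper.
    convergence = loss_real.item() + abs(gamma * loss_real.item() - loss_fake.item())
    return d_loss, k, convergence
```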
Toward Multimodal Image-to-Image Translation
Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman
Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
$ cd data/
$ bash download_pix2pix_dataset.sh edges2shoes
$ cd ../implementations/bicyclegan/
$ python3 bicyclegan.py
Various style translations by varying the latent code.
Boundary-Seeking Generative Adversarial Networks
R Devon Hjelm, Athul Paul Jacob, Tong Che, Adam Trischler, Kyunghyun Cho, Yoshua Bengio
Generative adversarial networks (GANs) are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not work for discrete data. We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator. The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs). We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN) bedrooms, and Imagenet without conditioning.
$ cd implementations/bgan/
$ python3 bgan.py
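
For continuous data, the boundary-seeking objective drives the generator toward the discriminator's decision boundary. The following is a small sketch assuming the discriminator outputs probabilities in (0, 1); it is an illustration rather than an excerpt from bgan.py.

```python
# Hedged sketch of the boundary-seeking generator objective (continuous data).
import torch

def boundary_seeking_loss(d_fake_probs, eps=1e-8):
    # Pushes the generator toward the decision boundary D(x) = 0.5.
    return 0.5 * torch.mean(
        (torch.log(d_fake_probs + eps) - torch.log(1 - d_fake_probs + eps)) ** 2
    )
```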
ClusterGAN: Latent Space Clustering in Generative Adversarial Networks
Sudipto Mukherjee, Himanshu Asnani, Eugene Lin, Sreeram Kannan
Generative Adversarial networks (GANs) have obtained remarkable success in many unsupervised learning tasks and unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering specific loss, we are able to achieve clustering in the latent space. Our results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.
Code based on a full PyTorch implementation.
$ cd implementations/cluster_gan/
$ python3 clustergan.py
Conditional Generative Adversarial Nets
Mehdi Mirza, Simon Osindero
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.
$ cd implementations/cgan/
$ python3 cgan.py
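
The paper conditions both networks by feeding in the label y. One common way to do this is to embed the label and concatenate it with the noise (for the generator) or with the flattened image (for the discriminator); the sketch below illustrates that pattern with made-up layer sizes, and is not a verbatim excerpt from cgan.py.

```python
# Hedged sketch of label conditioning via embedding + concatenation.
import torch
import torch.nn as nn

n_classes, latent_dim, img_flat = 10, 100, 28 * 28

label_emb = nn.Embedding(n_classes, n_classes)
generator_in = nn.Linear(latent_dim + n_classes, 128)     # first generator layer
discriminator_in = nn.Linear(img_flat + n_classes, 128)   # first discriminator layer

z = torch.randn(16, latent_dim)
labels = torch.randint(0, n_classes, (16,))
g_input = torch.cat((label_emb(labels), z), dim=-1)          # condition the generator
imgs_flat = torch.rand(16, img_flat)
d_input = torch.cat((imgs_flat, label_emb(labels)), dim=-1)  # condition the discriminator
h_g, h_d = generator_in(g_input), discriminator_in(d_input)
```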
Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
Emily Denton, Sam Gross, Rob Fergus
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods.
$ cd implementations/ccgan/
$ python3 ccgan.py
Context Encoders: Feature Learning by Inpainting
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros
We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
$ cd implementations/context_encoder/
<follow steps at the top of context_encoder.py>
$ python3 context_encoder.py
Rows: Masked | Inpainted | Original | Masked | Inpainted | Original
Coupled Generative Adversarial Networks
Ming-Yu Liu, Oncel Tuzel
We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.
$ cd implementations/cogan/
$ python3 cogan.py
Generated MNIST and MNIST-M images
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G:X→Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F:Y→X and introduce a cycle consistency loss to push F(G(X))≈X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
$ cd data/
$ bash download_cyclegan_dataset.sh monet2photo
$ cd ../implementations/cyclegan/
$ python3 cyclegan.py --dataset_name monet2photo
Monet to photo translations.
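
A compact, hedged sketch of the three generator-side loss terms described above: the adversarial term, the cycle-consistency term, and the identity term. Variable names and loss weights are illustrative; see cyclegan.py for the actual training loop.

```python
# Hedged sketch of the CycleGAN generator objective.
import torch
import torch.nn as nn

criterion_gan = nn.MSELoss()        # least-squares adversarial loss
criterion_cycle = nn.L1Loss()
criterion_identity = nn.L1Loss()

def generator_losses(G_AB, G_BA, D_A, D_B, real_A, real_B,
                     lambda_cyc=10.0, lambda_id=5.0):
    fake_B, fake_A = G_AB(real_A), G_BA(real_B)
    pred_fake_B, pred_fake_A = D_B(fake_B), D_A(fake_A)
    loss_gan = (criterion_gan(pred_fake_B, torch.ones_like(pred_fake_B))
                + criterion_gan(pred_fake_A, torch.ones_like(pred_fake_A)))
    # Cycle consistency: F(G(x)) should recover x, and vice versa.
    loss_cycle = (criterion_cycle(G_BA(fake_B), real_A)
                  + criterion_cycle(G_AB(fake_A), real_B))
    # Identity: a generator fed an image from its target domain should return it unchanged.
    loss_id = (criterion_identity(G_BA(real_A), real_A)
               + criterion_identity(G_AB(real_B), real_B))
    return loss_gan + lambda_cyc * loss_cycle + lambda_id * loss_id
```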
Deep Convolutional Generative Adversarial Network
Alec Radford, Luke Metz, Soumith Chintala
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
$ cd implementations/dcgan/
$ python3 dcgan.py
Learning to Discover Cross-Domain Relations with Generative Adversarial Networks
Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, Jiwon Kim
While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.
$ cd data/
$ bash download_pix2pix_dataset.sh edges2shoes
$ cd ../implementations/discogan/
$ python3 discogan.py --dataset_name edges2shoes
Rows from top to bottom: (1) Real image from domain A (2) Translated image from
domain A (3) Reconstructed image from domain A (4) Real image from domain B (5)
Translated image from domain B (6) Reconstructed image from domain B
On Convergence and Stability of GANs
Naveen Kodali, Jacob Abernethy, James Hays, Zsolt Kira
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
$ cd implementations/dragan/
$ python3 dragan.py
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
Zili Yi, Hao Zhang, Ping Tan, Minglun Gong
Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.
$ cd data/
$ bash download_pix2pix_dataset.sh facades
$ cd ../implementations/dualgan/
$ python3 dualgan.py --dataset_name facades
Energy-based Generative Adversarial Network
Junbo Zhao, Michael Mathieu, Yann LeCun
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.
$ cd implementations/ebgan/
$ python3 ebgan.py
ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks
Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Chen Change Loy, Yu Qiao, Xiaoou Tang
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL.
$ cd implementations/esrgan/
<follow steps at the top of esrgan.py>
$ python3 esrgan.py
Nearest Neighbor Upsampling | ESRGAN
Generative Adversarial Network
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
$ cd implementations/gan/
$ python3 gan.py
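
A minimal sketch of the two-player objective with binary cross-entropy, for orientation only. The discriminator is assumed to output probabilities; names are illustrative rather than taken verbatim from gan.py.

```python
# Hedged sketch of the vanilla GAN losses.
import torch
import torch.nn as nn

adversarial_loss = nn.BCELoss()

def gan_losses(discriminator, real_imgs, gen_imgs):
    valid = torch.ones(real_imgs.size(0), 1)
    fake = torch.zeros(real_imgs.size(0), 1)
    # Generator: fool D into labelling generated samples as real.
    g_loss = adversarial_loss(discriminator(gen_imgs), valid)
    # Discriminator: separate real from generated samples.
    d_loss = (adversarial_loss(discriminator(real_imgs), valid)
              + adversarial_loss(discriminator(gen_imgs.detach()), fake)) / 2
    return g_loss, d_loss
```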
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
$ cd implementations/infogan/
$ python3 infogan.py
Result of varying categorical latent variable by column.
Result of varying continuous latent variable by row.
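
The mutual-information lower bound is typically implemented with an auxiliary head Q that tries to recover the latent code fed to the generator. The sketch below is an assumption-laden illustration (loss weights and names are made up), not an excerpt of infogan.py.

```python
# Hedged sketch of InfoGAN's mutual-information term.
import torch.nn as nn

categorical_loss = nn.CrossEntropyLoss()  # for the discrete (one-hot) code
continuous_loss = nn.MSELoss()            # for the continuous code

def info_loss(pred_label_logits, target_labels, pred_code, target_code,
              lambda_cat=1.0, lambda_con=0.1):
    # Maximizing the variational lower bound on I(c; G(z, c)) reduces to
    # reconstructing the sampled codes from the generated images.
    return (lambda_cat * categorical_loss(pred_label_logits, target_labels)
            + lambda_con * continuous_loss(pred_code, target_code))
```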
Least Squares Generative Adversarial Networks
Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, Stephen Paul Smolley
Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson χ2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on five scene datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.
$ cd implementations/lsgan/
$ python3 lsgan.py
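
A short sketch of the least-squares losses with the common 0/1 target convention; names and the exact weighting are illustrative.

```python
# Hedged sketch of the LSGAN losses.
import torch
import torch.nn as nn

mse = nn.MSELoss()

def lsgan_losses(d_real, d_fake_for_d, d_fake_for_g):
    valid = torch.ones_like(d_real)
    fake = torch.zeros_like(d_fake_for_d)
    d_loss = 0.5 * (mse(d_real, valid) + mse(d_fake_for_d, fake))
    g_loss = 0.5 * mse(d_fake_for_g, valid)
    return d_loss, g_loss
```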
Multimodal Unsupervised Image-to-Image Translation
Xun Huang, Ming-Yu Liu, Serge Belongie, Jan Kautz
Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL
$ cd data/
$ bash download_pix2pix_dataset.sh edges2shoes
$ cd ../implementations/munit/
$ python3 munit.py --dataset_name edges2shoes
Results by varying the style code.
Image-to-Image Translation with Conditional Adversarial Networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
$ cd data/
$ bash download_pix2pix_dataset.sh facades
$ cd ../implementations/pix2pix/
$ python3 pix2pix.py --dataset_name facades
Rows from top to bottom: (1) The condition for the generator (2) Generated image based on the condition (3) The true corresponding image to the condition
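
A hedged sketch of the generator objective described above: a conditional adversarial term plus a weighted L1 term that ties the output to the ground truth. The two-argument discriminator and the lambda_pixel weight are illustrative assumptions.

```python
# Hedged sketch of the pix2pix generator loss.
import torch
import torch.nn as nn

criterion_gan = nn.MSELoss()    # PatchGAN outputs compared to 1/0 maps
criterion_pixel = nn.L1Loss()

def pix2pix_generator_loss(discriminator, generator, condition, target,
                           lambda_pixel=100.0):
    fake = generator(condition)
    pred_fake = discriminator(fake, condition)   # D judges (output, condition) pairs
    valid = torch.ones_like(pred_fake)
    return criterion_gan(pred_fake, valid) + lambda_pixel * criterion_pixel(fake, target)
```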
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, Dilip Krishnan
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
Trains a classifier on images that have been translated from the source domain (MNIST) to the target domain (MNIST-M) using the annotations of the source domain images. The classification network is trained jointly with the generator network to optimize the generator for both providing a proper domain translation and also for preserving the semantics of the source domain image. The classification network trained on translated images is compared to the naive solution of training a classifier on MNIST and evaluating it on MNIST-M. The naive model manages a 55% classification accuracy on MNIST-M while the one trained during domain adaptation achieves a 95% classification accuracy.
$ cd implementations/pixelda/
$ python3 pixelda.py
| Method  | Accuracy |
| ------- | -------- |
| Naive   | 55%      |
| PixelDA | 95%      |
Rows from top to bottom: (1) Real images from MNIST (2) Translated images from
MNIST to MNIST-M (3) Examples of images from MNIST-M
The relativistic discriminator: a key element missing from standard GAN
Alexia Jolicoeur-Martineau
In the standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimates the probability that the given real data is more realistic than randomly sampled fake data. We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken to reach the state of the art by 400%), and 3) RaGANs are able to generate plausible high-resolution images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.
$ cd implementations/relativistic_gan/
$ python3 relativistic_gan.py # Relativistic Standard GAN
$ python3 relativistic_gan.py --rel_avg_gan # Relativistic Average GAN
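
A sketch of the relativistic and relativistic-average discriminator losses, assuming the discriminator returns raw logits (BCEWithLogitsLoss applies the sigmoid). Names are illustrative and the exact formulation in relativistic_gan.py may differ slightly.

```python
# Hedged sketch of the RGAN / RaGAN discriminator loss.
import torch
import torch.nn as nn

adversarial_loss = nn.BCEWithLogitsLoss()

def relativistic_d_loss(real_logits, fake_logits, average=False):
    valid = torch.ones_like(real_logits)
    fake = torch.zeros_like(fake_logits)
    if average:  # RaGAN: compare against the mean of the other set
        loss_real = adversarial_loss(real_logits - fake_logits.mean(0, keepdim=True), valid)
        loss_fake = adversarial_loss(fake_logits - real_logits.mean(0, keepdim=True), fake)
    else:        # RGAN: compare real/fake samples directly
        loss_real = adversarial_loss(real_logits - fake_logits, valid)
        loss_fake = adversarial_loss(fake_logits - real_logits, fake)
    return (loss_real + loss_fake) / 2
```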
Semi-Supervised Generative Adversarial Network
Augustus Odena
We extend Generative Adversarial Networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.
$ cd implementations/sgan/
$ python3 sgan.py
Softmax GAN
Min Lin
Softmax GAN is a novel variant of the Generative Adversarial Network (GAN). The key idea of Softmax GAN is to replace the classification loss in the original GAN with a softmax cross-entropy loss in the sample space of one single batch. In the adversarial learning of N real training samples and M generated samples, the target of discriminator training is to distribute all the probability mass to the real samples, each with probability 1/M, and distribute zero probability to generated data. In the generator training phase, the target is to assign equal probability to all data points in the batch, each with probability 1/(M+N). While the original GAN is closely related to Noise Contrastive Estimation (NCE), we show that Softmax GAN is the Importance Sampling version of GAN. We further demonstrate with experiments that this simple change stabilizes GAN training.
$ cd implementations/softmax_gan/
$ python3 softmax_gan.py
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, Jaegul Choo
Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.
$ cd implementations/stargan/
<follow steps at the top of stargan.py>
$ python3 stargan.py
Original | Black Hair | Blonde Hair | Brown Hair | Gender Flip | Aged
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
$ cd implementations/srgan/
<follow steps at the top of srgan.py>
$ python3 srgan.py
Nearest Neighbor Upsampling | SRGAN
Unsupervised Image-to-Image Translation Networks
Ming-Yu Liu, Thomas Breuel, Jan Kautz
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in this https URL.
$ cd data/
$ bash download_cyclegan_dataset.sh apple2orange
$ cd ../implementations/unit/
$ python3 unit.py --dataset_name apple2orange
Wasserstein GAN
Martin Arjovsky, Soumith Chintala, Léon Bottou
We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions.
$ cd implementations/wgan/
$ python3 wgan.py
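
A minimal sketch of one critic update with weight clipping, the mechanism the paper uses to enforce the Lipschitz constraint. Function and variable names are illustrative; the critic outputs an unbounded scalar score, not a probability.

```python
# Hedged sketch of a single WGAN critic step.
import torch

def critic_step(critic, optimizer_d, real_imgs, fake_imgs, clip_value=0.01):
    optimizer_d.zero_grad()
    # Approximate the Wasserstein distance: maximize E[f(real)] - E[f(fake)].
    d_loss = -torch.mean(critic(real_imgs)) + torch.mean(critic(fake_imgs.detach()))
    d_loss.backward()
    optimizer_d.step()
    # Enforce the Lipschitz constraint by clipping the critic's weights.
    for p in critic.parameters():
        p.data.clamp_(-clip_value, clip_value)
    return d_loss
```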
Improved Training of Wasserstein GANs
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
$ cd implementations/wgan_gp/
$ python3 wgan_gp.py
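
A hedged sketch of the gradient penalty: interpolation coefficients are drawn per example and the norm is taken over each example's flattened gradient (both points also come up in the issues further down). Names are illustrative.

```python
# Hedged sketch of the WGAN-GP gradient penalty for 4D image batches.
import torch
from torch import autograd

def gradient_penalty(critic, real_samples, fake_samples):
    # One interpolation coefficient per example, broadcast over C, H, W.
    alpha = torch.rand(real_samples.size(0), 1, 1, 1, device=real_samples.device)
    interpolates = (alpha * real_samples + (1 - alpha) * fake_samples).requires_grad_(True)
    d_interpolates = critic(interpolates)
    gradients = autograd.grad(
        outputs=d_interpolates,
        inputs=interpolates,
        grad_outputs=torch.ones_like(d_interpolates),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]
    # Flatten per example so the L2 norm covers all pixels and channels.
    gradients = gradients.view(real_samples.size(0), -1)
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()
```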
Wasserstein Divergence for GANs
Jiqing Wu, Zhiwu Huang, Janine Thoma, Dinesh Acharya, Luc Van Gool
In many domains of computer vision, generative adversarial networks (GANs) have achieved great success, among which the family of Wasserstein GANs (WGANs) is considered to be state-of-the-art due to the theoretical contributions and competitive qualitative performance. However, it is very challenging to approximate the k-Lipschitz constraint required by the Wasserstein-1 metric (W-met). In this paper, we propose a novel Wasserstein divergence (W-div), which is a relaxed version of W-met and does not require the k-Lipschitz constraint. As a concrete application, we introduce a Wasserstein divergence objective for GANs (WGAN-div), which can faithfully approximate W-div through optimization. Under various settings, including progressive growing training, we demonstrate the stability of the proposed WGAN-div owing to its theoretical and practical advantages over WGANs. Also, we study the quantitative and visual performance of WGAN-div on standard image synthesis benchmarks, showing the superior performance of WGAN-div compared to the state-of-the-art methods.
pytorch-gan's Issues
Why adding channels here with input size?
query in Energy Based GAN (EBGAN)
Hi
Thank you for your wonderful effort in implementing so many papers.
I have a query regarding your EBGAN implementation.
https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/ebgan/ebgan.py
In line 175, when you are optimizing the generator G, why is the pixelwise_loss computed using gen_imgs.detach() and not simply gen_imgs? If we call .detach() while updating G, then the pixelwise_loss will not contribute any gradients towards optimizing the generator weights. Is that the right way to do it?
Please clarify my doubt.
Thank You in Advance !
Training ACGAN
Sorry this is not an issue. I have some questions regarding the implementation of ACGAN and training ACGAN.
- In your implementation the encoded label vector is multiplied with the noise vector and given as the input to the G. But shouldn't it be concatenated?
- The CrossEntropy loss in PyTorch already includes a softmax function. Therefore I am unclear whether a softmax function should be included in the Discriminator or not.
- I am unclear about when to stop training. It seems (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html) that the accuracy of D for real vs. fake images should initially be high for real images but low for fake images. However, when I run ACGAN both are high initially (over 95%) and then drop to ~70%; the accuracy never goes below 60%. Do you think my training is correct? Also, when should I stop training: when this accuracy is close to 50%, when the loss of D converges, or when the loss of G converges?
How to use multiple GPUs to train cycleGAN
CycleGan error: weight of size [64, 3, 7, 7], expected input[1, 1, 262, 262] to have 3 channels, but got 1 channels instead
I am having this error using horse2zebra. I have checked all the input images and they all have the size [3, 265, 265], so most probably the error is caused by G_AB(real_B). I have, however, tried cyclegan with cifar-100 and monet2photo and everything went fine. I am using PyTorch 0.4 and Python 3.6.
line 157, in <module>
loss_id_B = criterion_identity(G_AB(real_B), real_B)
...
Given groups=1, weight of size [64, 3, 7, 7], expected input[1, 1, 262, 262] to have 3 channels, but got 1 channels instead
Inconsistent descent compared with Algorithm 1 of WGAN-div
@eriklindernoren the discriminator loss in the current implementation seems inconsistent with Algorithm 1 of WGAN-div.
no results of CycleGAN
I ran CycleGAN following all the commands the author gave, but the images and saved_models directories stay empty. I found that cyclegan.py never enters line 140, "for i, batch in enumerate(dataloader):".
Any advice is appreciated.
WGAN implementation error
In your WGAN implementation, the n_critic loop is around the generator update when it should actually be around the discriminator's (the critic is the discriminator).
How much GPU resources
Hi,
I was wondering which GPUs did you use for cycleGAN?
Thanks!
NameError: name 'FeatureExtractor' is not defined
When evaluating ESRGAN and SRGAN I can see that the class FeatureExtractor() is not defined anywhere. I can see the latest commit is 13 days ago, so I assume you are currently working on implementing these models?
WGAN-GP gradient penalty not calculated correctly
The L2 norm of the gradient penalty term in WGAN-GP should be calculated across all dimensions of an image, but the current implementation calculates it across each dimension of an image separately (i.e., the absolute value of each pixel in an image is calculated).
Indeed, in the following line, gradients is a tensor of size (batch_size, nb_channels, img_width, img_height):
gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
To solve the issue, the 4-dimensional tensor containing the gradients should be flattened across the last 3 dimensions:
gradients = gradients.view(real_samples.size(0), -1)
gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
loss_D about wgan
Possible error of began
In line 161 of the began implementation, should it be...?
g_loss = torch.mean(torch.abs(discriminator(gen_imgs) - real_imgs))
instead of
g_loss = torch.mean(torch.abs(discriminator(gen_imgs) - gen_imgs))
Please correct me if I miss something. Many thanks!
The channel size does not match the broadcast shape in DCGAN
tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: output with shape [1, 32, 32] doesn't match the broadcast shape [3, 32, 32]
When I was running the DCGAN example, it gave me this error.
wgan issue
When I run python3 wgan.py, this print statement raises an error: print ("[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]" % (epoch, opt.n_epochs, batches_done % len(dataloader), len(dataloader), d_loss.item(), gen_validity.item()))
ValueError: only one element tensors can be converted to Python scalars
How can I fix it?
Runtime error in the original GAN
It raises RuntimeError: cannot join current thread when the whole training process ends.
Nothing was changed in the code.
SRGAN's data download link address has expired
The link to img_align_celeba.zip has been turned off by Dropbox.
Hi there, thanks very much for the wonderful repo.
When I tried to download img_align_celeba.zip from Dropbox, I found the link had been turned off. Could you update the link or share a private link with me for downloading the dataset?
Thanks very much.
wgan-gp
Hi, I believe the implementation of wgan-gp is buggy. The interpolation uses a random number for each pixel, whereas the pseudocode in the paper says to use a random number for each example. I believe the line should be replaced by
alpha = Tensor(np.random.random(size=(real_samples.shape[0], 1, 1, 1)))
About the Identity loss in cyclegan.py
The source code of Identity loss is shown below:
loss_id_A = criterion_identity(G_BA(real_A), real_A)
loss_id_B = criterion_identity(G_AB(real_B), real_B)
This seems a little bit weird to me, maybe it should be:
loss_id_A = criterion_identity(G_AB(real_A), real_A)
loss_id_B = criterion_identity(G_BA(real_B), real_B)
Saved model for inference
Hi @eriklindernoren and all,
Thanks to all contributors for the awesome repository.
Variable object has no attribute 'item'
PyTorch-GAN/implementations/gan/gan.py
Line 154 in 3a00900
Traceback (most recent call last):
File "gan/gan.py", line 154, in
d_loss.item(), g_loss.item()))
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 67, in getattr
return object.getattribute(self, name)
AttributeError: 'Variable' object has no attribute 'item'
Dataset in SRGAN
The dataset on Dropbox can't be downloaded.
Runtime error
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
Namespace(b1=0.5, b2=0.999, batch_size=1, channels=3, checkpoint_interval=-1, dataset_name='facades', decay_epoch=100, epoch=0, img_height=256, img_width=256, lr=0.0002, n_cpu=8, n_epochs=200, sample_interval=500)
Namespace(b1=0.5, b2=0.999, batch_size=1, channels=3, checkpoint_interval=-1, dataset_name='facades', decay_epoch=100, epoch=0, img_height=256, img_width=256, lr=0.0002, n_cpu=8, n_epochs=200, sample_interval=500)
It seems abnormal to me that this line was printed twice.
Are the two models acgan.py and sgan.py the same?
Add GAN implementation in NLP field
2 small suggestions
- It could be organized in the chronological order of the papers rather than by the first letter of the model, so that it is easier to follow the related development of GANs more clearly.
- It would be better if you could briefly summarize the connections and differences between the models in the papers.
About the embedding in CGAN
Hi, I have a question about the cgan implementation.
In your code, you use nn.embedding to embed the prior labels. The problem is, when the learnable weights are not specified, the vocabulary will be randomly initialized.
In both the generator and the discriminator you use two different nn.Embedding layers, and they are initialized differently. However, when we generate a fake image we use one embedding, but when the discriminator judges that fake image we use the other embedding. Will this have an effect on the final performance?
I am not very familiar with GANs, but this seems strange to me. It's true that we still use the same labels, but the actual embeddings are different. Wouldn't using the same embedding for the discriminator and generator be more reasonable?
loss_D in SRGAN
Hello and my thanks for this great repo. I like how your code is simple and effective.
I would like to point out that you are using MSE for your Discriminator Loss instead of Binary Cross Entropy. If you have a specific reason for why you are doing that, could you share it?
auxiliary_loss not used in cGAN
Hi, I am wondering about cgan: the auxiliary_loss is not used before optimizer_G.step(), but the CGAN still trains normally and gets correct results. Maybe I am overlooking some significant detail. Could someone give me some tips? Thanks!
Possible error in code
In line 265 the code does not look right:
code_input = Variable(FloatTensor(np.random.normal(-1, 1, (batch_size, opt.code_dim))))
Shouldn't it be 'uniform' instead of 'normal'?
- Mirtha
RuntimeError:"freeze_support()"
Thank you very much for sharing such great code! Would it be better to use if __name__ == '__main__':?
patchGAN
What exactly is it? Where can I learn about it?
How to use WGAN_gp to generate pics of myself
Hi, your code is amazing. I want to use your WGAN_gp code to generate my own images.
Brightness problem in SRGAN
I notice that the generated images have higher brightness and more colors than the original images or the results of other approaches. What causes this?
some question about cyclegan.py
I'm a little confused about why the GAN loss uses mse_loss while the cycle loss and identity loss use L1 loss; I didn't find this in the paper.
The second question is about why fake_A_buffer.push_and_pop() is used. It seems that while the buffer holds fewer than 50 samples it does nothing special, and once it exceeds 50 it randomly returns a stored sample? I am really confused about this.
Channel first issue
Where have you told PyTorch that you are going to use channel-first images? Previously I used channel-last images with PyTorch.
PyTorch-GAN/implementations/cgan/cgan.py
Line 34 in f4c14d1
Problem in DCGAN
DCGAN fails to learn the MNIST dataset. Is there a problem in the implementation?
WGAN GP detach is necessary?
@eriklindernoren I think you should apply .detach() to real_samples and fake_samples. Shouldn't you?
What is the (*) for??
PyTorch-GAN/implementations/gan/gan.py
Line 47 in 9e3ac57
Issue in CycleGan-Pix2Pix while calling Discriminator()
While calling Discriminator() from models.py in cycleGan and pix2pix, I get a syntax error as follows:
Traceback (most recent call last):
File "cyclegan.py", line 15, in <module>
from models import *
File "./PyTorch-GAN/implementations/cyclegan/models.py", line 167
*discriminator_block(64, 128, 2, True),
^
SyntaxError: invalid syntax
This happens with both python3 and python2.7.
It looks like the * unpacking does not work and I could not find a way to make it work.
Any bits of advice?
Thanks
Wrong Shared Parameters in Coupled GAN
@eriklindernoren According to the paper, the coupled discriminators should share the parameters of the last layers, but you implemented them to share the parameters of the first layers (just like the generators do).
AttributeError: module 'torchvision.transforms' has no attribute 'Resize'
I'm running the srgan.py implementation and receive the following error:
Namespace(b1=0.5, b2=0.999, batch_size=1, channels=3, checkpoint_interval=-1, dataset_name='img_align_celeba', decay_epoch=100, epoch=0, hr_height=256, hr_width=256, lr=0.0002, n_cpu=8, n_epochs=200, sample_interval=100)
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /Users/Username/.torch/models/vgg19-dcbb9e9d.pth
100%|███████████████████████████████████████████| 574673361/574673361 [00:34<00:00, 16895858.86it/s]
Traceback (most recent call last):
File "srgan.py", line 104, in <module>
lr_transforms = [ transforms.Resize((opt.hr_height//4, opt.hr_height//4), Image.BICUBIC),
AttributeError: module 'torchvision.transforms' has no attribute 'Resize'
If you need any additional information let me know...
UserWarning: nn.Upsample is deprecated.
Hi~
I came across the following warnings while using InfoGAN. I hope they can be fixed.
/usr/local/lib/python3.7/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/usr/local/lib/python3.7/site-packages/torch/nn/modules/container.py:92: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
input = module(input)
Thank you for writing the code, it gave me a lot of inspiration!
I was denied access to your git repository
I can't git clone https://github.com/eriklindernoren/PyTorch-GAN
Error message is below.
fatal: could not create work tree dir 'PyTorch-GAN'
Why does this error happen?
Noise input
The original paper used noise as part of the generator input. Why did you not use it?
Switched index in loading dataset?
Hello, I noticed that within your implementation of pix2pix, the Dataset returns the image in one form,
but when reading it again in the training loop it is written like
PyTorch-GAN/implementations/pix2pix/pix2pix.py
Lines 127 to 128 in fd9f071
This happens multiple times within pix2pix.py when loading the image. Is this switching intentional?
wgan retain_graph
Running wgan.py:
Traceback (most recent call last):
File "wgan.py", line 179, in <module>
gen_validity.backward(valid)
File "/home/alcaster/.pyenv/versions/ml/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/alcaster/.pyenv/versions/ml/lib/python3.6/site-packages/torch/autograd/__init__.py", line 89, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
My versions of packages:
torch==0.4.0
torchvision==0.2.1
Classify the generated image
I ran your cgan code and the generated images look quite good. However, when I tried to classify the generated images with another network (ResNet18), the predicted label is always 'eight'. Is this a common feature of cgan?
Possible error of relativistic gan
if opt.rel_avg_gan:
    g_loss = adversarial_loss(fake_pred - real_pred.mean(0, keepdim=True), valid)
else:
    g_loss = adversarial_loss(fake_pred - real_pred, valid)

# Loss measures generator's ability to fool the discriminator
g_loss = adversarial_loss(discriminator(gen_imgs), valid)

g_loss.backward()
optimizer_G.step()
Is this expected? It looks like g_loss is getting overwritten.