
flysystem-google-cloud-storage's Introduction

flysystem-google-cloud-storage

A Google Cloud Storage adapter for flysystem - a PHP filesystem abstraction.


Installation

composer require superbalist/flysystem-google-storage

Integrations

Want to get started quickly? Check out some of these integrations:

Usage

use Google\Cloud\Storage\StorageClient;
use League\Flysystem\Filesystem;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;

/**
 * The credentials will be auto-loaded by the Google Cloud Client.
 *
 * 1. The client will first look at the GOOGLE_APPLICATION_CREDENTIALS env var.
 *    You can use putenv('GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json'); to set the location of your credentials file.
 *
 * 2. The client will look for the credentials file at the following paths:
 * - windows: %APPDATA%/gcloud/application_default_credentials.json
 * - others: $HOME/.config/gcloud/application_default_credentials.json
 *
 * If running in Google App Engine, the built-in service account associated with the application will be used.
 * If running in Google Compute Engine, the built-in service account associated with the virtual machine instance will be used.
 */

$storageClient = new StorageClient([
    'projectId' => 'your-project-id',
]);
$bucket = $storageClient->bucket('your-bucket-name');

$adapter = new GoogleStorageAdapter($storageClient, $bucket);

$filesystem = new Filesystem($adapter);

/**
 * The credentials are manually specified by passing in a keyFilePath.
 */

$storageClient = new StorageClient([
    'projectId' => 'your-project-id',
    'keyFilePath' => '/path/to/service-account.json',
]);
$bucket = $storageClient->bucket('your-bucket-name');

$adapter = new GoogleStorageAdapter($storageClient, $bucket);

$filesystem = new Filesystem($adapter);

// write a file
$filesystem->write('path/to/file.txt', 'contents');

// update a file
$filesystem->update('path/to/file.txt', 'new contents');

// read a file
$contents = $filesystem->read('path/to/file.txt');

// check if a file exists
$exists = $filesystem->has('path/to/file.txt');

// delete a file
$filesystem->delete('path/to/file.txt');

// rename a file
$filesystem->rename('filename.txt', 'newname.txt');

// copy a file
$filesystem->copy('filename.txt', 'duplicate.txt');

// delete a directory
$filesystem->deleteDir('path/to/directory');

// see http://flysystem.thephpleague.com/api/ for a full list of available functionality

Google Storage specifics

When using a custom storage URI, the bucket name will not be prepended to the file path.

$storageClient = new StorageClient([
    'projectId' => 'your-project-id',
]);
$bucket = $storageClient->bucket('your-bucket-name');
$adapter = new GoogleStorageAdapter($storageClient, $bucket);

// uri defaults to "https://storage.googleapis.com"
$filesystem = new Filesystem($adapter);
$filesystem->getUrl('path/to/file.txt');
// "https://storage.googleapis.com/your-bucket-name/path/to/file.txt"

// set custom storage uri
$adapter->setStorageApiUri('http://example.com');
$filesystem = new Filesystem($adapter);
$filesystem->getUrl('path/to/file.txt');
// "http://example.com/path/to/file.txt"

// You can also prefix the file path if needed.
$adapter->setStorageApiUri('http://example.com');
$adapter->setPathPrefix('extra-folder/another-folder/');
$filesystem = new Filesystem($adapter);
$filesystem->getUrl('path/to/file.txt');
// "http://example.com/extra-folder/another-folder/path/to/file.txt"


flysystem-google-cloud-storage's Issues

Unrecognized option "googlecloudstorage"

 [Symfony\Component\Config\Definition\Exception\InvalidConfigurationException]                      
  Unrecognized option "googlecloudstorage" under "oneup_flysystem.adapters.catalog_storage_adapter"  

Error moving file from one GCS bucket (in sub-folder) to another bucket (root folder)

I get the following error when trying to move a file from "my-bucket/sub-folder/file.txt" to "my-other-bucket/file.txt":

Fatal error: Nesting level too deep - recursive dependency? in C:\wamp\www\my-app\application\libraries\elfinder\elFinderVolumeDriver.class.php on line 1887

Update - I get the same error when no sub-folder is involved.

[RFC] Allow usage of google api client 2.0

It would be great if the consumer of the package could decide which version of the Google API Client libraries they want to use. The signatures of the relevant classes should be stable and have not diverged.

We in particular are excited about the new authentication methods for our API client integration: https://github.com/websightgmbh/l5-google-client

To allow a graceful transition, I propose a new major version of your package (which would block composer from updating the api-client libraries from 1.x in older / existing projects). This new version could also get the change from #4, which may interfere with your current production environment.

I will pull-request the change to your package but leave the ticket here for discussion.

Thank you in advance!

Bump version

Could you please bump the version number, so we can get the latest changes without requiring dev-master?

Thank you :)

Caching?

Does this adapter work with any form of caching from Flysystem?
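
For what it's worth, Flysystem v1's generic cached adapter (league/flysystem-cached-adapter) should be able to wrap this adapter like any other. A minimal sketch, assuming that package is installed and $adapter is the GoogleStorageAdapter from the usage section above:

use League\Flysystem\Cached\CachedAdapter;
use League\Flysystem\Cached\Storage\Memory as MemoryStore;
use League\Flysystem\Filesystem;

// cache directory listings and metadata in memory for the current request
$cachedAdapter = new CachedAdapter($adapter, new MemoryStore());
$filesystem = new Filesystem($cachedAdapter);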

Changes to google api endpoint

I just received the following from google cloud:

Hello Google Cloud Storage Customer,

We are writing to let you know that starting June 20, 2019, Cloud Storage will allow 
JSON API requests to be sent to storage.googleapis.com in addition to 
www.googleapis.com (the current endpoint).

What do I need to know?
On June 20, 2019, we will begin updating the Cloud Client Libraries and gsutil to 
use the new endpoint; storage.googleapis.com. After the update, your JSON API 
requests will start using the new endpoint.

What do I need to do?
If your production or test code doesn't check for endpoint-specific details, no 
action is required on your part.

If your production or test code checks for endpoint-specific details, you will need to 
modify them before June 20, 2019 as follows:

  * If your code checks that the ‘baseUrl’ or ‘rootUrl’ fields in the JSON API 
    Discovery document point to www.googleapis.com, you will need to modify 
    those checks to allow either storage.googleapis.com or www.googleapis.com. 
    Note that the oauth2 scopes fields in the Discovery document will not change 
    and will continue to point to www.googleapis.com.
  * If your code checks that the ‘selfLink’ field in bucket or object metadata 
    points to www.googleapis.com, you will need to modify that check to allow
    either storage.googleapis.com or www.googleapis.com.
  * If you access Cloud Storage through a firewall, you will need to ensure that 
    requests to storage.googleapis.com are allowed by your firewall rules.

I would imagine the plugin will be affected by this?
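
For anyone who wants to opt in early, the adapter's endpoint is already configurable via setStorageApiUri (see the "Google Storage specifics" section above):

// explicitly target the new endpoint ahead of the client library update
$adapter->setStorageApiUri('https://storage.googleapis.com');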

Undefined Index contentType in normalizeObject function

I am running into an issue reading a directory of files from a google cloud storage bucket.

ErrorException in GoogleStorageAdapter.php line 193: Undefined index: contentType

/home/vagrant/Code/TestApplication/vendor/superbalist/flysystem-google-storage/src/GoogleStorageAdapter.php line 193

The error occurs in the normalizeObject function when I run the following, where "testing" is my directory name:

$files = Storage::disk('gcs')->files('testing');
dd($files);  

And here is the dump of the object that causes the exception:

array:16 [▼
  "kind" => "storage#object"
  "id" => "testbucket/testing//1486146319340058"
  "selfLink" => "https://www.googleapis.com/storage/v1/b/testapplication/o/testing%2F"
  "name" => "testing/"
  "bucket" => "testbucket"
  "generation" => "1486146319340058"
  "metageneration" => "1"
  "timeCreated" => "2017-02-03T18:25:19.324Z"
  "updated" => "2017-02-03T18:25:19.324Z"
  "storageClass" => "STANDARD"
  "timeStorageClassUpdated" => "2017-02-03T18:25:19.324Z"
  "size" => "0"
  "md5Hash" => "1B2M2Y8AsgTpgAmY7PhCfg=="
  "mediaLink" => "https://www.googleapis.com/download/storage/v1/b/testapplication/o/testing%2F?generation=1486146319340058&alt=media"
  "crc32c" => "AAAAAA=="
  "etag" => "CJqky7vG9NECEAE="
]

If I mount the bucket on my local machine via the gcsfuse tool and use it as a local storage disk, everything works fine and this exception is not triggered.

A possible quick fix is to check whether contentType exists when setting the mimetype:
'mimetype' => (isset($info['contentType']) ? $info['contentType'] : ''),
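
On PHP 7+ the same guard can be written with the null coalescing operator:

'mimetype' => $info['contentType'] ?? '',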

How to set readable

I get the following error:
RuntimeException: Cannot read from non-readable stream in D:\xampp\htdocs\bitbucket\classified\vendor\guzzlehttp\psr7\src\Stream.php:208

Please suggest a solution. Do I need to update my key file?

StreamInterface vs resource.

In the tests, the stream is explicitly tested as being an instance of StreamInterface.

This stream is returned back further in Filesystem::readStream().

The problem here, however, is that the Filesystem interface states that the method should return a resource.

It's not a problem for me to check whether I'm getting a StreamInterface or a resource, but this is a little misleading. I also realise that fixing it would be a backwards-compatibility breaking change, so that raises the stakes a bit. Or maybe I'm wrong; in that case, do not hesitate to call me a fool.
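
In the meantime, guzzlehttp/psr7 (already a dependency of the Google client) can convert a StreamInterface back into a native resource. A minimal sketch of the check described above:

use GuzzleHttp\Psr7\StreamWrapper;
use Psr\Http\Message\StreamInterface;

$stream = $filesystem->readStream('path/to/file.txt');

// normalise to a plain PHP resource either way
if ($stream instanceof StreamInterface) {
    $stream = StreamWrapper::getResource($stream);
}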

How to upload file (image) to Firebase ?

I have a problem uploading a file (image) to Firebase Storage with PHP.
Currently, I can't find any function that solves this problem.
Please help.
Thanks!

SSL issues

Whilst developing locally, using v5.0.0 of this library, I am getting a cURL error about the SSL certificate.

How can I disable SSL checks for this library?

Or, if I use cacert.pem, where do I need to put that to get it to work?
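
One thing worth trying - and this is an assumption on my part, namely that the google/cloud client forwards restOptions to the underlying Guzzle client as request options - is pointing the client at your cacert.pem (or disabling verification for local development only):

$storageClient = new StorageClient([
    'projectId' => 'your-project-id',
    // assumption: restOptions are passed through to Guzzle as request options
    'restOptions' => [
        'verify' => '/path/to/cacert.pem', // or false to disable SSL checks (local dev only)
    ],
]);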

PHP Warning thrown after asset Uploaded - stream provided gets corrupted

Currently I have a site that uploads a file stored in the temp directory of the server, at a path like "/private/var/tmp/phprROvf6".

We then open that path and try to upload it to GCS. Example:

        $handle = fopen($path, 'r');
        if ($handle === false) {
            throw new InvalidArgumentException("$path could not be opened for reading");
        }
        $result = $filesystem->putStream($fileID, $handle);

        fclose($handle);

        return $result;

The issue is that whenever it tries to close the $handle variable after uploading, PHP states it is no longer a stream and errors out. I can confirm that the handle is a stream originally, that the file does upload correctly, and that the result returned is true. But after it gets uploaded - https://github.com/Superbalist/flysystem-google-cloud-storage/blob/master/src/GoogleStorageAdapter.php line 174 (protected function upload($path, $contents, Config $config)):

   $object = $this->bucket->upload($contents, $options);

the stream then becomes corrupt.

var_dump($handle) before it tries to upload -

    resource(13) of type (stream)

var_dump($handle) after the stream is uploaded -

   resource(13) of type (Unknown)

Any idea why this would happen or how to prevent it?
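
Until the underlying cause is fixed, a defensive guard around fclose() avoids the error. This is a workaround, not a fix:

$result = $filesystem->putStream($fileID, $handle);

// the adapter may have consumed or detached the underlying stream,
// so only close the handle if it is still a valid resource
if (is_resource($handle)) {
    fclose($handle);
}

return $result;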

Generated URLs are not RFC 3986 compliant

Generating a URI containing whitespace or any characters other than alphanumerics and "-_.~" will not be RFC 3986 compliant. Such URIs work in most browsers because they are encoded automatically, but they may fail when passed to other software modules that do not perform automatic encoding.

Pull request #90 will fix this bug. For anyone who has the same problem at the moment, the bug can be worked around by pointing composer at our bugfix branch:

{
    "require": {
        "superbalist/flysystem-google-storage": "dev-bugfix-rfc3986-urls"
    },
    "repositories": [
        {
            "type": "vcs",
            "url":  "https://github.com/mailspice/flysystem-google-cloud-storage"
        }
    ]
}

Is there a reason to explicitly set the visibility if not passed?

In the following lines, we check if the visibility has been specified, and if it hasn't we default to private. https://github.com/Superbalist/flysystem-google-cloud-storage/blob/master/src/GoogleStorageAdapter.php#L143-L149

Is there any reason we need to default to anything at all? If we specify nothing, the default object ACL of the bucket should be applied (by default private, visible to the owner). This seems like more desirable behaviour imo - if users specify a default object ACL on their buckets, I believe it's expected that it will apply to uploaded files.

If there is a good reason this is being done, let's keep it but make it clear in the documentation that we're applying private visibility by default.
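
For context, callers who do want a specific ACL can already request one explicitly through the write config, e.g.:

use League\Flysystem\AdapterInterface;

$filesystem->write('path/to/file.txt', 'contents', [
    'visibility' => AdapterInterface::VISIBILITY_PUBLIC,
]);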

Directories being normalised as files

I think I found a big problem with the GCS adapter.

I am having issues where directories are not being listed in elFinder, when they do exist in the GCS console.

I set up some break points, namely on listContents() and it seems that directories are being normalised (with the normaliseObject method) as files. I think this is stopping them being shown in elFinder.

Google Cloud - Connection example

Could someone provide examples of how to use the new GCloud API?

I've upgraded from v1 to v3 (of this library) and I can no longer use Google_Auth_AssertionCredentials, as the Google API library was changed to GCloud.
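
For reference, the v3 connection flow mirrors the usage section at the top of this page; a minimal sketch with a service account key file, which replaces the old .p12 + Google_Auth_AssertionCredentials flow:

use Google\Cloud\Storage\StorageClient;
use League\Flysystem\Filesystem;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;

$storageClient = new StorageClient([
    'projectId' => 'your-project-id',
    'keyFilePath' => '/path/to/service-account.json',
]);
$bucket = $storageClient->bucket('your-bucket-name');

$adapter = new GoogleStorageAdapter($storageClient, $bucket);
$filesystem = new Filesystem($adapter);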

GCS adapter suddenly stopped working

Last night my GCS root volume worked fine; this morning, it has started acting "odd".

The root volume now thinks it is locked, but I have no access controls on it at all.

I put a break point on the code where I initialise the adapter, and it looks like it has stopped authorizing it for some reason?

Code:

function googleCloud($bucket)
{
    $credentials = new \Google_Auth_AssertionCredentials(
        '[email protected]',
        [\Google_Service_Storage::DEVSTORAGE_FULL_CONTROL],
        file_get_contents(set_realpath('my-project.p12')),
        'notasecret'
    );

    $client = new \Google_Client();
    $client->setAssertionCredentials($credentials);
    $client->setDeveloperKey('MY_KEY');

    $service = new \Google_Service_Storage($client);

    $adapter = new GoogleStorageAdapter($service, $bucket);

    return $adapter;
}

Cannot edit new text file in GCS Bucket

It seems I am unable to edit text files inside a GCS bucket.

I tried both files created with elFinder and files uploaded directly in the Google Console.

Oddly, I can edit CSS files and I can even use the image resizer/cropper/rotator.

Change to semver for versioning

Thanks very much for this great package. Would you consider moving to semver-compatible versioning? I recently had to submit a change to the Drupal module using this package to pull in 7.0.0; however, it doesn't appear there are any real breaking changes from 5.x, which is what it had been on. Using semantic versioning would make it easier for packages depending on this one to identify breaking changes and keep API compatibility for more minor changes. At the moment, every release appears to bump the major version.

Moving files in GCS bucket doesn't move, just renames file with "folder_name\..."

So, big problem with moving a file inside a GCS bucket.

I have a file "test.css" in the root of a bucket, alongside some folders. I drag-and-dropped the CSS file into a folder. elFinder does its thing and seems to have moved the file. I navigate into the folder (css), and yup, there is the file.

However, when I head to my Google console and check the bucket, the test.css file is still in the root of the bucket, but has been renamed "css\test.css".

I tried with an image file, moving it to another folder: same thing. It simply changes the name of the file.

Creating a new file in the folder also results in a file being created in the root of the bucket, with the file name prepended with the folder name.

Screenshot of the bucket in my Google Console http://imgur.com/wck5YlA

Allow metadata key in getOptionsFromConfig

Hello there,
The getOptionsFromConfig method should accept the metadata key as a possible value.

I require this for the following properties:

"contentType": string
"contentLanguage": string
"cacheControl": string
The official google cloud documentation shows that the StorageObject allows the metadata property.
https://googlecloudplatform.github.io/google-cloud-php/#/docs/v0.20.1/storage/storageobject?method=update

And the JSON API docs show what properties the StorageObject can contain.
https://cloud.google.com/storage/docs/json_api/v1/objects#resource
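
A sketch of the requested change, mirroring the style of the getOptionsFromConfig snippet quoted later on this page (this passthrough is the proposal, not existing behaviour):

if ($metadata = $config->get('metadata')) {
    $options['metadata'] = $metadata;
}

which would then allow, for example:

$filesystem->write('path/to/file.txt', 'contents', [
    'metadata' => [
        'contentType' => 'text/plain',
        'contentLanguage' => 'en',
        'cacheControl' => 'public, max-age=3600',
    ],
]);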

Are you going to support getUrl() ?

Are you going to support getUrl()? It is used by Laravel and Spark, and a change to illuminate/filesystem now allows the driver to produce a URL for files. It looks like it is going to be released with the next Laravel release.


Massive uploads, memory issues.

I am looking to do some huge uploads (using elFinder and Google Cloud Storage) and it looks like I am running out of memory when doing so.

elFinder's debug information shows that it's using a peak memory of 5MB.

The upload is chunked into 10MB segments as well.

I'm wondering if there is anything that can be done?

Testing uploads to a local file system, everything is OK, and a file that is under 2GB (the current PHP memory limit) also works, but I'm wondering if there is a way to have the upload streamed in chunks, rather than as the whole file?

Cannot move folders, file not found error

When trying to move folders, I get a File Not Found error.

I also find that my breakpoint on the copy function (line 173, GoogleStorageAdapter.php) is not being hit when moving folders, but is when moving files?

Is moving (effectively copy/cut and pasting) folders simply not supported/working?

Resumable upload?

Does this library support resumable uploads?

If not, is it something you can implement as a new feature?

Propagate config options to google bucket upload

Right now you can pass an array containing upload config to writeStream, updateStream, etc. after retrieving the filesystem driver via $disk->getDriver(). The arguments are essentially passed through to
protected function upload($path, $contents, Config $config)
where the actual config of the Google bucket upload happens.

I noticed that the config arguments are prepared by another function:

protected function getOptionsFromConfig(Config $config)
{
    $options = [];

    if ($visibility = $config->get('visibility')) {
        $options['predefinedAcl'] = $this->getPredefinedAclForVisibility($visibility);
    } else {
        // if a file is created without an acl, it isn't accessible via the console
        // we therefore default to private
        $options['predefinedAcl'] = $this->getPredefinedAclForVisibility(AdapterInterface::VISIBILITY_PRIVATE);
    }

    return $options;
}

The problem I have is that only the 'visibility' argument from $config is actually parsed; additional arguments like 'chunkSize' or 'resumable', which are essential for chunked file uploads, are discarded.

I would suggest adding something along these lines to enable those features:

if ($config->has('chunkSize')) {
    $options['chunkSize'] = $config->get('chunkSize');
}

if ($config->has('resumable')) {
    $options['resumable'] = $config->get('resumable');
}
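
With that in place, a resumable, chunked upload could be requested per call. Hypothetical usage once the options are propagated:

$disk->getDriver()->writeStream('path/to/large-file.bin', $stream, [
    'resumable' => true,
    'chunkSize' => 10 * 1024 * 1024, // bytes; must be a multiple of 256 KiB
]);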

How to delete a dir with millions of files?

I think the code just hangs when you try to delete, or even list, a massive directory.

Is there any way to limit the number of rows returned, or to do it in small chunks?
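
One workaround is to bypass the adapter for this case and iterate the bucket directly through the underlying client, which pages results lazily rather than materialising the whole listing. A minimal sketch, assuming $bucket is the Google\Cloud\Storage\Bucket instance from the usage examples above:

// delete objects one page at a time instead of listing everything up front
foreach ($bucket->objects(['prefix' => 'path/to/directory/']) as $object) {
    $object->delete();
}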

new release.

As the title says, it's time for a new release.
We are testing against those versions, but we cannot use them.

PHP warnings while using via FlysystemStreamWrapper

I'm not sure if this is a problem with FlysystemStreamWrapper or with this library, but similar code with the AWS S3 adapter works fine. If the problem is with FlysystemStreamWrapper, I can open a separate issue there.

The problem is that when I register GoogleStorageAdapter as a stream wrapper and write to a file using file_put_contents, the file is uploaded fine but I get PHP warnings.

PHP version:

> php -v
PHP 7.1.13 (cli) (built: Jan  5 2018 15:31:15) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies

composer.json

{
    "require": {
        "league/flysystem": "dev-master",
        "twistor/flysystem-stream-wrapper": "dev-master",
        "superbalist/flysystem-google-storage": "dev-master"
    }
}

test.php

<?php
use League\Flysystem\Filesystem;
use Twistor\FlysystemStreamWrapper;
use Google\Cloud\Storage\StorageClient;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;
require __DIR__ . '/vendor/autoload.php';

$keyFilePath = '/Users/eero/flystream/google-application-credentials.json';
$bucketName = 'yourbucketname';
$projectId = 'yourgoogleprojectid';
$basePath = 'sandbox';

$clientConfig = [
	'projectId'  => $projectId,
	'keyFilePath' => $keyFilePath
];

$client = new StorageClient($clientConfig);
$bucket = $client->bucket($bucketName);
$adapter = new GoogleStorageAdapter($client, $bucket, $basePath);

$filesystem = new Filesystem($adapter);
FlysystemStreamWrapper::register('eerotest', $filesystem);
$targetPath = 'eerotest://testfile';
file_put_contents($targetPath, 'foo');

Now when I run php test.php I would expect no errors, but this is what I actually get:

PHP Warning:  fseek(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 421

Warning: fseek(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 421
PHP Warning:  fclose(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 387

Warning: fclose(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 387

Possible upload issue with elFinder and v3

I'm testing out v3 of this library as I need to upload huge files, and the change to a streaming method should help.

However, now when I upload a large file (for now, just 100-200MB) elFinder is behaving oddly, and I think it may be linked to this library.

Previously, the upload progress was accurate, and there would be lots of small chunked requests.

Now, there are a few small XHR requests (depending on the chunk size set in elFinder) which fill the progress bar to 100%, then there is a final request (elFinder displays "Doing something") that takes X amount of time and does the full upload of the file.

This is giving me issues with timeouts if the file isn't written quickly enough. I was under the impression the streaming method you implemented would solve timeout issues?

Stream uploads?

Doing tests with huge file uploads (10GB+), it seems that this adapter does not stream the upload; rather, it just writes the file, presumably to memory, and then finally to the bucket.

Is there any chance of getting the read and write stream functionality added in?
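
For reference, the consumer side of Flysystem's stream API looks like this; whether the adapter actually streams to GCS underneath, rather than buffering the whole file, is exactly what's being asked:

$stream = fopen('/path/to/huge-file.bin', 'rb');

$filesystem->writeStream('uploads/huge-file.bin', $stream);

// some adapters close the stream themselves, so guard the fclose
if (is_resource($stream)) {
    fclose($stream);
}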

Error creating new directories

Using elFinder to create a new directory in the root of a bucket, an error is thrown.

Whilst elFinder shows an error, the directory is actually created in GCS (checked using the GCS console).

I think the problem is with this adapter though.

In the upload() method, uploadType is always set to media, which may cause a conflict with elFinder.

It also always sets data to $contents, which is passed as an empty string via createDir(); again, this is likely to cause issues.

GoogleStorageAdapter::deleteDir fails to delete a directory

I'm unable to delete a directory on a Cloud Storage bucket since the update to 7.2.0. My application worked fine with v7.1.0.
I believe this was introduced by PR #94.
To illustrate the issue, please run the following snippet after doing composer require superbalist/flysystem-google-storage.

require_once __DIR__ . '/vendor/autoload.php';

use Google\Cloud\Storage\StorageClient;
use League\Flysystem\Filesystem;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;

$storageClient = new StorageClient([
    'projectId' => '<valid-project-id>',
    'keyFilePath' => '</path/to/service/account/file.json>',
]);
$bucket = $storageClient->bucket('<bucket-name>');
$adapter = new GoogleStorageAdapter($storageClient, $bucket);
$filesystem = new Filesystem($adapter);
$filesystem->createDir('test');
$filesystem->put('test/file.txt', 'contents');

// The above works fine. but here this call deletes the file, but fails to delete
// directory `test` from the bucket.
$filesystem->deleteDir('test');

I inspected the changes to the ::deleteDir method from said PR and I think the issue lies at line 264. At that point, $object['path'] for the case where $object['type'] === 'dir' isn't normalised, while $dirname is; therefore the directory we request to delete is never added to $filtered_objects.
To verify, I edited that loop to look like:

$filtered_objects = [];
foreach ($objects as $object) {
    if ($object['type'] === 'dir') { // normalise path for directories
        $object['path'] = $this->normaliseDirName($object['path']);
    }
    if (strpos($object['path'], $dirname) !== false) {
        $filtered_objects[] = $object;
    }
}

This seems to work for me. I have now locked this dependency at v7.1.0 to continue using it.
Thank you.

Duplicate sub folders being shown

I have a bucket with some nested folders and I am getting duplication for some reason, e.g.:

  • Main Bucket
    • Top Folder
      • Sub Folder A
        • Another folder
        • Another folder  <-- duplicate of above
      • Sub Folder B
        • etc

Uncaught Error: Class 'Google_Auth_AssertionCredentials' not found

After running composer update, I now get the error:
PHP Fatal error: Uncaught Error: Class 'Google_Auth_AssertionCredentials' not found

This was introduced in #5, when we allowed either ~1.1 or ^2.0.0@RC of google/apiclient to be installed. At some point between RC and the 2.0.0 final release, the Google_Auth_AssertionCredentials class was removed.
See googleapis/google-api-php-client#748 and https://github.com/google/google-api-php-client/blob/master/UPGRADING.md#google_auth_assertioncredentials-has-been-removed
When we pushed the release allowing 2.0.0@RC, we incorrectly bumped the minor version and kept the major as-is.
