
java-storage's Introduction

Google Cloud Storage Client for Java

Java idiomatic client for Cloud Storage.


Quickstart

If you are using Maven with the BOM, add this to your pom.xml file:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>26.41.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
  </dependency>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage-control</artifactId>
  </dependency>
</dependencies>

If you are using Maven without the BOM, add this to your dependencies:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-storage</artifactId>
  <version>2.40.0</version>
</dependency>
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-storage-control</artifactId>
  <version>2.40.1-SNAPSHOT</version><!-- {x-version-update:google-cloud-storage:current} -->
</dependency>

If you are using Gradle 5.x or later, add this to your dependencies:

implementation platform('com.google.cloud:libraries-bom:26.41.0')

implementation 'com.google.cloud:google-cloud-storage'

If you are using Gradle without the BOM, add this to your dependencies:

implementation 'com.google.cloud:google-cloud-storage:2.40.0'

If you are using SBT, add this to your dependencies:

libraryDependencies += "com.google.cloud" % "google-cloud-storage" % "2.40.0"

Authentication

See the Authentication section in the base directory's README.

Authorization

The client application making API calls must be granted authorization scopes required for the desired Cloud Storage APIs, and the authenticated principal must have the IAM role(s) required to access GCP resources using the Cloud Storage API calls.
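
For example, you can attach explicitly scoped Application Default Credentials to the client. The sketch below is illustrative only: the read-only scope is an assumption for the example, and the authenticated principal still needs a matching IAM role.

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

// Load Application Default Credentials and restrict them to a read-only
// Cloud Storage scope (illustrative; use the scope your calls require).
GoogleCredentials credentials =
    GoogleCredentials.getApplicationDefault()
        .createScoped("https://www.googleapis.com/auth/devstorage.read_only");

Storage storage =
    StorageOptions.newBuilder().setCredentials(credentials).build().getService();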

Getting Started

Prerequisites

You will need a Google Cloud Platform Console project with the Cloud Storage API enabled, and billing must be enabled to use Google Cloud Storage. Follow these instructions to get your project set up. You will also need to set up the local development environment by installing the Google Cloud Command Line Interface and running the following commands: gcloud auth login and gcloud config set project [YOUR PROJECT ID].

Installation and setup

You'll need to obtain the google-cloud-storage library. See the Quickstart section to add google-cloud-storage as a dependency in your code.

About Cloud Storage

Cloud Storage is a durable and highly available object storage service. Google Cloud Storage is almost infinitely scalable and guarantees consistency: when a write succeeds, the latest copy of the object will be returned to any GET, globally.

See the Cloud Storage client library docs to learn how to use this Cloud Storage Client Library.

About Storage Control

The Storage Control API lets you perform metadata-specific, control plane, and long-running operations.

The Storage Control API creates one space to perform metadata-specific, control plane, and long-running operations apart from the Storage API. Separating these operations from the Storage API improves API standardization and lets you run faster releases.
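
As a minimal sketch (assuming the generated client follows the usual GAPIC pattern with a create() factory; folder and managed-folder method signatures vary by version), the control-plane client is created and closed separately from the data-plane Storage client:

import com.google.storage.control.v2.StorageControlClient;

// Create the Storage Control client with Application Default Credentials and
// close it when done. Folder, managed folder, and storage layout operations
// are invoked on this client rather than on com.google.cloud.storage.Storage.
try (StorageControlClient controlClient = StorageControlClient.create()) {
  // control-plane calls (e.g. folder operations) go here
}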

If you are using Maven with the BOM, add this to your pom.xml file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>libraries-bom</artifactId>
            <version>26.37.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-storage-control</artifactId>
    </dependency>
</dependencies>

If you are using Maven without the BOM, add this to your dependencies:

<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage-control</artifactId>
    <version>2.40.1-SNAPSHOT</version><!-- {x-version-update:google-cloud-storage-control:current} -->
</dependency>

If you are using Gradle 5.x or later, add this to your dependencies:

implementation platform('com.google.cloud:libraries-bom:26.37.0')
implementation 'com.google.cloud:google-cloud-storage-control'

If you are using Gradle without the BOM, add this to your dependencies:

implementation 'com.google.cloud:google-cloud-storage-control:2.40.1-SNAPSHOT' // {x-version-update:google-cloud-storage-control:current}

Creating an authorized service object

To make authenticated requests to Google Cloud Storage, you must create a service object with credentials. You can then make API calls by calling methods on the Storage service object. The simplest way to authenticate is to use Application Default Credentials. These credentials are automatically inferred from your environment, so you only need the following code to create your service object:

import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

Storage storage = StorageOptions.getDefaultInstance().getService();

For other authentication options, see the Authentication page in Google Cloud Java.
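
For example, if you prefer to load a service account key explicitly instead of relying on Application Default Credentials, a sketch like the following works (the key file path is a placeholder):

import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.FileInputStream;

// Build a Storage client from an explicit service account key file.
// "/path/to/key.json" is a placeholder for your downloaded key.
try (FileInputStream keyStream = new FileInputStream("/path/to/key.json")) {
  Storage storage =
      StorageOptions.newBuilder()
          .setCredentials(ServiceAccountCredentials.fromStream(keyStream))
          .build()
          .getService();
}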

Storing data

Stored objects are called "blobs" in google-cloud and are organized into containers called "buckets". Blob, a subclass of BlobInfo, adds a layer of service-related functionality over BlobInfo. Similarly, Bucket adds a layer of service-related functionality over BucketInfo. In this code snippet, we will create a new bucket and upload a blob to that bucket.

Add the following imports at the top of your file:

import static java.nio.charset.StandardCharsets.UTF_8;

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.BucketInfo;

Then add the following code to create a bucket and upload a simple blob.

Important: Bucket names have to be globally unique (among all users of Cloud Storage). If you choose a bucket name that already exists, you'll get a helpful error message telling you to choose another name. In the code below, replace "my_unique_bucket" with a unique bucket name. See more about naming rules here.

// Create a bucket
String bucketName = "my_unique_bucket"; // Change this to something unique
Bucket bucket = storage.create(BucketInfo.of(bucketName));

// Upload a blob to the newly created bucket
BlobId blobId = BlobId.of(bucketName, "my_blob_name");
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
Blob blob = storage.create(blobInfo, "a simple blob".getBytes(UTF_8));

A complete example for creating a blob can be found at UploadObject.java.

At this point, you will be able to see your newly created bucket and blob on the Google Developers Console.
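
If the content lives on disk rather than in memory, you can stream the file with a resumable upload instead of buffering the whole payload. A minimal sketch, assuming a library version that provides Storage#createFrom (the local path and object name are placeholders):

import java.nio.file.Path;
import java.nio.file.Paths;

// Upload a local file to the bucket using a resumable upload.
Path localFile = Paths.get("/path/to/local-file.txt"); // placeholder path
BlobInfo fileInfo = BlobInfo.newBuilder(BlobId.of(bucketName, "my_uploaded_file.txt")).build();
Blob uploaded = storage.createFrom(fileInfo, localFile);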

Retrieving data

Now that we have content uploaded to the server, we can see how to read data from the server. Add the following line to your program to get back the blob we uploaded.

BlobId blobId = BlobId.of(bucketName, "my_blob_name");
byte[] content = storage.readAllBytes(blobId);
String contentString = new String(content, UTF_8);

A complete example for accessing blobs can be found at DownloadObject.java.
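
For larger objects you may prefer to download straight to a file instead of materializing a byte array. A minimal sketch, assuming a library version that provides Storage#downloadTo (the destination path is a placeholder):

import java.nio.file.Paths;

// Stream the object directly to a local file instead of reading it into memory.
storage.downloadTo(BlobId.of(bucketName, "my_blob_name"), Paths.get("/tmp/my_blob_name"));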

Updating data

Another thing we may want to do is update a blob. The following snippet shows how to update a Storage blob if it exists.

// Uses java.nio.ByteBuffer and java.nio.channels.WritableByteChannel
BlobId blobId = BlobId.of(bucketName, "my_blob_name");
Blob blob = storage.get(blobId);
if (blob != null) {
  byte[] prevContent = blob.getContent();
  System.out.println(new String(prevContent, UTF_8));
  WritableByteChannel channel = blob.writer();
  channel.write(ByteBuffer.wrap("Updated content".getBytes(UTF_8)));
  channel.close();
}
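
If you only need to change an object's metadata (for example, its content type) rather than its content, a short sketch:

// Fetch the blob and update its metadata without rewriting the content.
Blob existing = storage.get(BlobId.of(bucketName, "my_blob_name"));
if (existing != null) {
  storage.update(existing.toBuilder().setContentType("text/plain").build());
}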

Listing buckets and contents of buckets

Suppose that you've added more buckets and blobs, and now you want to see the names of your buckets and the contents of each one. Add the following code to list all your buckets and all the blobs inside each bucket.

// List all your buckets
System.out.println("My buckets:");
for (Bucket bucket : storage.list().iterateAll()) {
  System.out.println(bucket);

  // List all blobs in the bucket
  System.out.println("Blobs in the bucket:");
  for (Blob blob : bucket.list().iterateAll()) {
    System.out.println(blob);
  }
}

Complete source code

See ListObjects.java for a complete example.
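
If you only want the objects under a particular prefix ("folder"), the list call accepts options. A minimal sketch using a hypothetical "reports/" prefix:

import com.google.api.gax.paging.Page;
import com.google.cloud.storage.Storage.BlobListOption;

// List only blobs whose names start with "reports/", treating "/" as a
// directory delimiter so nested "subfolders" are not expanded.
Page<Blob> page =
    storage.list(bucketName, BlobListOption.prefix("reports/"), BlobListOption.currentDirectory());
for (Blob b : page.iterateAll()) {
  System.out.println(b.getName());
}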

Example Applications

  • Bookshelf - An App Engine application that manages a virtual bookshelf.
    • This app uses google-cloud to interface with Cloud Datastore and Cloud Storage. It also uses Cloud SQL, another Google Cloud Platform service.
  • Flexible Environment/Storage example - An app that uploads files to a public Cloud Storage bucket on the App Engine Flexible Environment runtime.

Samples

Samples are in the samples/ directory.

Sample Source Code Try it
Native Image Storage Sample source code Open in Cloud Shell
Configure Retries source code Open in Cloud Shell
Generate Signed Post Policy V4 source code Open in Cloud Shell
Get Service Account source code Open in Cloud Shell
Quickstart Grpc Dp Sample source code Open in Cloud Shell
Quickstart Grpc Sample source code Open in Cloud Shell
Quickstart Sample source code Open in Cloud Shell
Quickstart Storage Control Sample source code Open in Cloud Shell
Add Bucket Default Owner source code Open in Cloud Shell
Add Bucket Iam Conditional Binding source code Open in Cloud Shell
Add Bucket Iam Member source code Open in Cloud Shell
Add Bucket Label source code Open in Cloud Shell
Add Bucket Owner source code Open in Cloud Shell
Change Default Storage Class source code Open in Cloud Shell
Configure Bucket Cors source code Open in Cloud Shell
Create Bucket source code Open in Cloud Shell
Create Bucket Dual Region source code Open in Cloud Shell
Create Bucket Pub Sub Notification source code Open in Cloud Shell
Create Bucket With Object Retention source code Open in Cloud Shell
Create Bucket With Storage Class And Location source code Open in Cloud Shell
Create Bucket With Turbo Replication source code Open in Cloud Shell
Delete Bucket source code Open in Cloud Shell
Delete Bucket Pub Sub Notification source code Open in Cloud Shell
Disable Bucket Versioning source code Open in Cloud Shell
Disable Default Event Based Hold source code Open in Cloud Shell
Disable Lifecycle Management source code Open in Cloud Shell
Disable Requester Pays source code Open in Cloud Shell
Disable Uniform Bucket Level Access source code Open in Cloud Shell
Enable Bucket Versioning source code Open in Cloud Shell
Enable Default Event Based Hold source code Open in Cloud Shell
Enable Lifecycle Management source code Open in Cloud Shell
Enable Requester Pays source code Open in Cloud Shell
Enable Uniform Bucket Level Access source code Open in Cloud Shell
Get Bucket Autoclass source code Open in Cloud Shell
Get Bucket Metadata source code Open in Cloud Shell
Get Bucket Rpo source code Open in Cloud Shell
Get Default Event Based Hold source code Open in Cloud Shell
Get Public Access Prevention source code Open in Cloud Shell
Get Requester Pays Status source code Open in Cloud Shell
Get Retention Policy source code Open in Cloud Shell
Get Uniform Bucket Level Access source code Open in Cloud Shell
List Bucket Iam Members source code Open in Cloud Shell
List Buckets source code Open in Cloud Shell
List Pub Sub Notifications source code Open in Cloud Shell
Lock Retention Policy source code Open in Cloud Shell
Make Bucket Public source code Open in Cloud Shell
Print Bucket Acl source code Open in Cloud Shell
Print Bucket Acl Filter By User source code Open in Cloud Shell
Print Pub Sub Notification source code Open in Cloud Shell
Remove Bucket Cors source code Open in Cloud Shell
Remove Bucket Default Kms Key source code Open in Cloud Shell
Remove Bucket Default Owner source code Open in Cloud Shell
Remove Bucket Iam Conditional Binding source code Open in Cloud Shell
Remove Bucket Iam Member source code Open in Cloud Shell
Remove Bucket Label source code Open in Cloud Shell
Remove Bucket Owner source code Open in Cloud Shell
Remove Retention Policy source code Open in Cloud Shell
Set Async Turbo Rpo source code Open in Cloud Shell
Set Bucket Autoclass source code Open in Cloud Shell
Set Bucket Default Kms Key source code Open in Cloud Shell
Set Bucket Website Info source code Open in Cloud Shell
Set Client Endpoint source code Open in Cloud Shell
Set Default Rpo source code Open in Cloud Shell
Set Public Access Prevention Enforced source code Open in Cloud Shell
Set Public Access Prevention Inherited source code Open in Cloud Shell
Set Retention Policy source code Open in Cloud Shell
Create Folder source code Open in Cloud Shell
Create Hierarchical Namespace Bucket source code Open in Cloud Shell
Delete Folder source code Open in Cloud Shell
Get Folder source code Open in Cloud Shell
List Folders source code Open in Cloud Shell
Rename Folder source code Open in Cloud Shell
Activate Hmac Key source code Open in Cloud Shell
Create Hmac Key source code Open in Cloud Shell
Deactivate Hmac Key source code Open in Cloud Shell
Delete Hmac Key source code Open in Cloud Shell
Get Hmac Key source code Open in Cloud Shell
List Hmac Keys source code Open in Cloud Shell
Create Managed Folder source code Open in Cloud Shell
Delete Managed Folder source code Open in Cloud Shell
Get Managed Folder source code Open in Cloud Shell
List Managed Folders source code Open in Cloud Shell
Add File Owner source code Open in Cloud Shell
Batch Set Object Metadata source code Open in Cloud Shell
Change Object Csek To Kms source code Open in Cloud Shell
Change Object Storage Class source code Open in Cloud Shell
Compose Object source code Open in Cloud Shell
Copy Object source code Open in Cloud Shell
Copy Old Version Of Object source code Open in Cloud Shell
Delete Object source code Open in Cloud Shell
Delete Old Version Of Object source code Open in Cloud Shell
Download Byte Range source code Open in Cloud Shell
Download Encrypted Object source code Open in Cloud Shell
Download Object source code Open in Cloud Shell
Download Object Into Memory source code Open in Cloud Shell
Download Public Object source code Open in Cloud Shell
Download Requester Pays Object source code Open in Cloud Shell
Generate Encryption Key source code Open in Cloud Shell
Generate V4 Get Object Signed Url source code Open in Cloud Shell
Generate V4 Put Object Signed Url source code Open in Cloud Shell
Get Object Metadata source code Open in Cloud Shell
List Objects source code Open in Cloud Shell
List Objects With Old Versions source code Open in Cloud Shell
List Objects With Prefix source code Open in Cloud Shell
Make Object Public source code Open in Cloud Shell
Move Object source code Open in Cloud Shell
Print File Acl source code Open in Cloud Shell
Print File Acl For User source code Open in Cloud Shell
Release Event Based Hold source code Open in Cloud Shell
Release Temporary Hold source code Open in Cloud Shell
Remove File Owner source code Open in Cloud Shell
Rotate Object Encryption Key source code Open in Cloud Shell
Set Event Based Hold source code Open in Cloud Shell
Set Object Metadata source code Open in Cloud Shell
Set Object Retention Policy source code Open in Cloud Shell
Set Temporary Hold source code Open in Cloud Shell
Stream Object Download source code Open in Cloud Shell
Stream Object Upload source code Open in Cloud Shell
Upload Encrypted Object source code Open in Cloud Shell
Upload Kms Encrypted Object source code Open in Cloud Shell
Upload Object source code Open in Cloud Shell
Upload Object From Memory source code Open in Cloud Shell
Allow Divide And Conquer Download source code Open in Cloud Shell
Allow Parallel Composite Upload source code Open in Cloud Shell
Download Bucket source code Open in Cloud Shell
Download Many source code Open in Cloud Shell
Upload Directory source code Open in Cloud Shell
Upload Many source code Open in Cloud Shell

Troubleshooting

To get help, follow the instructions in the shared Troubleshooting document.

Supported Java Versions

Java 8 or above is required for using this client.

Google's Java client libraries, Google Cloud Client Libraries and Google Cloud API Libraries, follow the Oracle Java SE support roadmap (see the Oracle Java SE Product Releases section).

For new development

In general, new feature development occurs with support for the lowest Java LTS version covered by Oracle's Premier Support (which typically lasts 5 years from initial General Availability). If the minimum required JVM for a given library is changed, it is accompanied by a semver major release.

Java 11 and, as of September 2021, Java 17 are the best choices for new development.

Keeping production systems current

Google tests its client libraries with all current LTS versions covered by Oracle's Extended Support (which typically lasts 8 years from initial General Availability).

Legacy support

Google's client libraries support legacy versions of Java runtimes on a best-efforts basis, with long-term stable releases that don't receive feature updates, since it may not be possible to backport all patches.

Google provides updates on a best-efforts basis to apps that continue to use Java 7, though such apps might need to upgrade to the most recent version of the library that supports their JVM.

Where to find specific information

The latest versions and the supported Java versions are identified on the individual GitHub repository github.com/googleapis/java-SERVICENAME and on google-cloud-java.

Versioning

This library follows Semantic Versioning, but the Storage interface is occasionally updated with new methods, which can break your code if you implement this interface (for example, for testing purposes).

Contributing

Contributions to this library are always welcome and highly encouraged.

See CONTRIBUTING for more information on how to get started.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.

License

Apache 2.0 - See LICENSE for more information.

CI Status

Java Version Status
Java 8 Kokoro CI
Java 8 OSX Kokoro CI
Java 8 Windows Kokoro CI
Java 11 Kokoro CI

Java is a registered trademark of Oracle and/or its affiliates.

java-storage's People

Contributors

ajaaym, andreamlin, andrey-qlogic, athakor, benwhitehead, chingor13, danielduhh, dmitry-fa, elharo, frankyn, garrettjonesgoogle, gcf-owl-bot[bot], houglum, jesselovelace, kolea2, mpeddada1, neenu1995, neozwu, pongad, release-please[bot], renovate-bot, sduskis, stephaniewang526, suraj-qlogic, suztomo, sydney-munro, tswast, vam-google, yihanzhen, yoshi-automation


java-storage's Issues

[Storage] add download directory ability to the storage client

Currently it is only possible to download or call getContent on one blob at a time, which makes processing slow for directories with 1k+ files (regardless of their size).
gsutil offers a -m flag which performs actions in a "multi-threaded/multi-processing" context.
Would it be possible to add such an ability to the storage client?

An example implementation is AWS's TransferManager: downloadDirectory(String bucketName, String keyPrefix, File destinationDirectory).

Tracking bug: com.google.cloud.storage.StorageException: Connection closed prematurely:..

Using a ReadChannel, if a premature connection closure occurs, the client isn't able to recover when it should be able to. The client at that moment has a specific range context which makes it retryable.

Additionally, the implementation is too reliant on the etag not changing, when the generation number can serve the same purpose. This also needs to be addressed.

Example failure:

CONFIG: -------------- REQUEST  --------------
GET http://localhost:8080/download/storage/v1/b/test-bucket/o/test-create-blob?alt=media
Accept-Encoding: gzip
Authorization: <Not Logged>
Range: bytes=0-10485759
User-Agent: gcloud-java/ Google-API-Java-Client/1.30.9 Google-HTTP-Java-Client/1.34.2 (gzip)
x-goog-api-client: gl-java/1.8.0 gdcl/1.30.9 linux/5.2.17
x-goog-testbench-instructions: return-broken-stream

Mar 06, 2020 8:47:01 AM com.google.api.client.http.HttpRequest execute
CONFIG: curl -v --compressed -H 'Accept-Encoding: gzip' -H 'Authorization: <Not Logged>' -H 'Range: bytes=0-10485759' -H 'User-Agent: gcloud-java/ Google-API-Java-Client/1.30.9 Google-HTTP-Java-Client/1.34.2 (gzip)' -H 'x-goog-api-client: gl-java/1.8.0 gdcl/1.30.9 linux/5.2.17' -H 'x-goog-testbench-instructions: return-broken-stream' -- 'http://localhost:8080/download/storage/v1/b/test-bucket/o/test-create-blob?alt=media'
Mar 06, 2020 8:47:01 AM com.google.api.client.http.HttpResponse <init>
CONFIG: -------------- RESPONSE --------------
HTTP/1.0 200 OK
Server: Werkzeug/1.0.0 Python/3.7.0
Content-Range: bytes 0-10485758/10485760
x-goog-generation: 1
Content-Length: 10485760
Date: Fri, 06 Mar 2020 16:47:01 GMT
Content-Type: text/html; charset=utf-8
x-goog-hash: md5=8clkXbwU793H2KMiaF8m6w==,crc32c=CPJwmw==

Mar 06, 2020 8:47:03 AM com.google.api.client.util.LoggingByteArrayOutputStream close
CONFIG: Total: 1,114,112 bytes (logging first 16,384 bytes)
Mar 06, 2020 8:47:03 AM com.google.api.client.util.LoggingByteArrayOutputStream close
CONFIG: (logged response bytes omitted)



com.google.cloud.storage.StorageException: Connection closed prematurely: bytesRead = 1114112, Content-Length = 10485760

	at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:230)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:711)
	at com.google.cloud.storage.BlobReadChannel$1.call(BlobReadChannel.java:129)
	at com.google.cloud.storage.BlobReadChannel$1.call(BlobReadChannel.java:125)
	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
	at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
	at com.google.cloud.storage.BlobReadChannel.read(BlobReadChannel.java:124)
	at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:65)
	at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109)
	at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.FilterInputStream.read(FilterInputStream.java:107)
	at com.google.cloud.storage.it.ITStorageTest.testFailure(ITStorageTest.java:359)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.io.IOException: Connection closed prematurely: bytesRead = 1114112, Content-Length = 10485760
	at com.google.api.client.http.javanet.NetHttpResponse$SizeValidatingInputStream.throwIfFalseEOF(NetHttpResponse.java:204)
	at com.google.api.client.http.javanet.NetHttpResponse$SizeValidatingInputStream.read(NetHttpResponse.java:166)
	at java.io.FilterInputStream.read(FilterInputStream.java:133)
	at com.google.api.client.util.LoggingInputStream.read(LoggingInputStream.java:57)
	at java.io.FilterInputStream.read(FilterInputStream.java:107)
	at com.google.api.client.util.ByteStreams.copy(ByteStreams.java:49)
	at com.google.api.client.util.IOUtils.copy(IOUtils.java:87)
	at com.google.api.client.util.IOUtils.copy(IOUtils.java:59)
	at com.google.api.client.http.HttpResponse.download(HttpResponse.java:410)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:706)
	... 40 more

Using the C++ testbench to reproduce the issue, though it requires tweaks to work with the Java library.

Google Cloud Storage - java sdk - retrySettings does not recover network errors

I'm trying to configure my 'google cloud storage' client to use internal retries.
I added configuration for retrySettings like this:

    RetrySettings retrySettings = RetrySettings
            .newBuilder()
            .setMaxAttempts(50)
            .setInitialRetryDelay(Duration.ofSeconds(10))
            .setRetryDelayMultiplier(1.5)
            .setMaxRetryDelay(Duration.ofSeconds(20))
            .build();
    storage = StorageOptions.newBuilder().setRetrySettings(retrySettings).setCredentials(credentials).build().getService();
....

I then tried to upload a 500MB file and, during the upload process, disconnected the network.
I expected the upload to be retried for at least 10 minutes; however, I got an error almost immediately.

I have some questions:

  1. Am I missing something? Does the code above look OK?
  2. Is there a way to turn on debug information for the Java client, so I can see in the log file what occurred?
  3. I did not find an appropriate Java tutorial for the Google Cloud Storage Java SDK, only some code samples. Is there a good tutorial for the Java SDK?

Documentation for uploading big files to GC Storage

The 'create' method of the Storage interface that takes an InputStream is marked as @Deprecated with the notice:

For large content, {@link #writer} is recommended as it uses resumable upload. By default any md5 and crc32c values in the given {@code blobInfo} are ignored unless requested via the {@code BlobWriteOption.md5Match} and {@code BlobWriteOption.crc32cMatch} options. The given input stream is closed upon success.

https://github.com/googleapis/google-cloud-java/blob/master/google-cloud-clients/google-cloud-storage/src/main/java/com/google/cloud/storage/Storage.java

This project recently contained an example of uploading big files to storage; for now, this code has been removed:

    private void run(Storage storage, Path uploadFrom, BlobInfo blobInfo) throws IOException {
      if (Files.size(uploadFrom) > 1_000_000) {
        // When content is not available or large (1MB or more) it is recommended
        // to write it in chunks via the blob's channel writer.
        try (WriteChannel writer = storage.writer(blobInfo)) {
          byte[] buffer = new byte[1024];
          try (InputStream input = Files.newInputStream(uploadFrom)) {
            int limit;
            while ((limit = input.read(buffer)) >= 0) {
              try {
                writer.write(ByteBuffer.wrap(buffer, 0, limit));
              } catch (Exception ex) {
                ex.printStackTrace();
              }
            }
          }
        }
      } else {
        byte[] bytes = Files.readAllBytes(uploadFrom);
        // create the blob in one request.
        storage.create(blobInfo, bytes);
      }
      System.out.println("Blob was created");
    }

There is another example of uploading big files using crc32c in the ticket https://github.com/googleapis/google-cloud-java/issues/6416

Please add a correct and efficient code sample for uploading big files (big meaning gigabytes).

Storage and url encoding

Environment details

  • API: Storage
  • OS type and version: MacOS Catalina
  • Java version: 11.0.4 (Scala 2.13.1)
  • google-cloud-java version(s): 1.103.1

Steps to reproduce

  1. upload a file named 1+1 2 to my-bucket
  2. list files in my-bucket
  3. for each file, download

Code example

storage.list(bucket).iterateAll().asScala.foreach(blob => blob.downloadTo(blob.getName()))

Stack trace

com.google.cloud.RetryHelper$RetryHelperException: com.google.cloud.storage.StorageException: 404 Not Found
No such object: eli-test/1+1+2
	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:54)
	at com.google.cloud.storage.Blob.downloadTo(Blob.java:233)
	at com.google.cloud.storage.Blob.downloadTo(Blob.java:217)
	at Main$.$anonfun$download$1(Main.scala:73)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at zio.blocking.Blocking$Service.$anonfun$effectBlocking$5(Blocking.scala:134)
	at zio.internal.FiberContext.evaluateNow(FiberContext.scala:386)
	at zio.internal.FiberContext.$anonfun$fork$2(FiberContext.scala:655)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.google.cloud.storage.StorageException: 404 Not Found
No such object: eli-test/1+1+2
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:229)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:675)
	at com.google.cloud.storage.Blob$2.run(Blob.java:238)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
	at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
	... 10 more
Caused by: com.google.api.client.http.HttpResponseException: 404 Not Found
No such object: eli-test/1+1+2
	at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1113)
	at com.google.api.client.googleapis.media.MediaHttpDownloader.executeCurrentRequest(MediaHttpDownloader.java:255)
	at com.google.api.client.googleapis.media.MediaHttpDownloader.download(MediaHttpDownloader.java:185)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeMediaAndDownloadTo(AbstractGoogleClientRequest.java:685)
	at com.google.api.services.storage.Storage$Objects$Get.executeMediaAndDownloadTo(Storage.java:6996)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:671)
	... 15 more


Any additional information below

If I try to encode the blob name using blob.toBuilder().setBlobId(...).build(), I see double encoding: 1%2B1%202 => 1%252B1%25202
Also, using blob.signUrl produces a valid URL that allows download.

Can't obtain info on blob with space in name


Environment details

  1. Manjaro 19.0.2
  2. Java version: Java 11
  3. datastore version(s): 1.102.3

Steps to reproduce

  1. Create a blob with whitespace in its name: some/another/white space.txt
  2. Obtain info about it with storage.get(bucketName, "some/another/white space.txt")
  3. Obtain the length from there: an NPE occurs

While debugging we can find the following error:

{
  "code" : 404,
  "errors" : [ {
    "domain" : "global",
    "message" : "No such object: test-52565030-4e60-11ea-90d4-b3cfcebcc4ca/some/another/white+space.txt",
    "reason" : "notFound"
  } ],
  "message" : "No such object: test-52565030-4e60-11ea-90d4-b3cfcebcc4ca/some/another/white+space.txt"
}

Thanks!

Read timeout error when uploading large files

Hello,
I'm trying to upload large files.
I've tried to use "WriterChannel" (because as mentioned in the documentation, it's better to use it with large files), but when I used it, I always got this error:

com.google.cloud.storage.StorageException: Read timed out
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:226)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.write(HttpStorageRpc.java:773)
	at com.google.cloud.storage.BlobWriteChannel$1.run(BlobWriteChannel.java:60)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
	at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
	at com.google.cloud.storage.BlobWriteChannel.flushBuffer(BlobWriteChannel.java:53)
	at com.google.cloud.BaseWriteChannel.flush(BaseWriteChannel.java:112)
	at com.google.cloud.BaseWriteChannel.write(BaseWriteChannel.java:139)
	at de.intenta.storage.cloud.google.GoogleStorageWriter.writeObject(GoogleStorageWriter.java:139)
	at de.intenta.storage.cloud.google.GoogleStorageWriterTest.testWriteObjectBigFiles(GoogleStorageWriterTest.java:274)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
	at java.net.SocketInputStream.read(SocketInputStream.java:171)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
	at sun.security.ssl.InputRecord.read(InputRecord.java:503)
	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
	at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
	at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
	at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:347)
	at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:36)
	at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:144)
	at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:79)
	at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:995)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.write(HttpStorageRpc.java:750)
	... 75 more

Cloud Storage: getStorageClass in SetStorageClassLifecycleAction is not visible

The getStorageClass() method in the SetStorageClassLifecycleAction class is only visible inside the package. It's not accessible from outside the package.

I am trying to read the storage class set by the lifecycle rule through the SDK in the following way, but it looks like it's not possible. Can you suggest an alternative way to access it if this is not possible?

if (lifecycleAction.getActionType().equals(SetStorageClassLifecycleAction.TYPE)) {
  return ((SetStorageClassLifecycleAction) lifeCycleRule.getAction()).getStorageClass();
}

// Compiler error : The method getStorageClass() from the type BucketInfo.LifecycleRule.SetStorageClassLifecycleAction is not visible

Environment details

OS type and version: Windows 10
Java version: JDK 11
sdk version: google-cloud-storage-1.7.8.0.jar

user/password proxy support for google cloud storage client

Hi,

I'm writing a client for Google Cloud Storage using the google-cloud-storage jar, version 1.37.1.
Maven excerpt:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-storage</artifactId>
  <version>1.37.1</version>
</dependency>

I want to configure it to work with a proxy that uses a username and password, but the code does not work.
I configure my code to use the proxy via the System.setProperty method:
System.setProperty("http.proxyUser", my_username);
System.setProperty("http.proxyPassword", my_password);

However, this does not work.
If I use a proxy without a username/password, it works fine.
Also, using a proxy which expects SSL connections does not work.

My question is:

  1. What shall I do to enable working with a proxy with a username/password, and with SSL?
  2. I know I'm using an old jar; if I upgrade to the latest one, will that give me support for a proxy with a username/password?

Thanks.

Eliyahu

Cloud Storage recently stopped handling the RFC 3986 URI encoding by itself

Environment details

Java version: 8
google-cloud-java version(s): libraries-bom:3.3.0

Steps to reproduce

1- Upload a file with spaces in its name
2- Read that file

Code example

StorageOptions.getDefaultInstance().getService().create(BlobInfo.newBuilder("local-files", "da8c9305-f275-4a52-affb-a3ac6e583dc9/p2p eq transit- Transit time estimator.xlsx").build(), bytes);
StorageOptions.getDefaultInstance().getService().readAllBytes("local-files", "da8c9305-f275-4a52-affb-a3ac6e583dc9/p2p eq transit- Transit time estimator.xlsx");

Stack trace

com.google.api.client.googleapis.json.GoogleJsonResponseException: 404 Not Found
No such object: local-files/da8c9305-f275-4a52-affb-a3ac6e583dc9/p2p+eq+transit-+Transit+time+estimator.xlsx  404 Not Found
No such object: local-files/da8c9305-f275-4a52-affb-a3ac6e583dc9/p2p+eq+transit-+Transit+time+estimator.xlsx
com.google.api.client.googleapis.json.GoogleJsonResponseException: 404 Not Found
No such object: local-files/da8c9305-f275-4a52-affb-a3ac6e583dc9/p2p+eq+transit-+Transit+time+estimator.xlsx
	at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:443)
	at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1108)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:541)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:474)
	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeMedia(AbstractGoogleClientRequest.java:502)
	at com.google.api.services.storage.Storage$Objects$Get.executeMedia(Storage.java:7006)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.load(HttpStorageRpc.java:630)
	at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:589)
	at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:586)
	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
	at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
	at com.google.cloud.storage.StorageImpl.readAllBytes(StorageImpl.java:585)
	at com.google.cloud.storage.StorageImpl.readAllBytes(StorageImpl.java:577)

External references such as API reference guides used

https://cloud.google.com/storage/docs/request-endpoints#encoding

According to this, the encoding is typically handled for you by client libraries, such as the Cloud Storage client libraries, so you can pass the raw object name to them.

Any additional information below

This stopped working recently, after upgrading the libraries-bom from 3.1.0 to 3.3.0.
The object exists in the bucket with spaces in its name.


Storage.get(bucketname,filepath) returns null when filename has space (1.103.1,java8)

Steps to reproduce

  1. Create a file inside a bucket that has a space in its name (e.g. hotel booking.pdf)
  2. Try to download the blob using storage.get(bucket, filePath)

Code example

log.info("Attempting to download file {} from bucket {}", filePath,bucketName);
    Blob blob = storage.get(bucketName, filePath);
    if (Objects.isNull(blob)) {
      log.error("file not present for {}", fileName);
      //            throw new FileNotFoundException("file not present for: {}", fileName);
    }

Stack trace

file not present for 'hotel booking.pdf'


Thanks!

Cloud Storage: Unable to get blobs with space in their name after upgrading to 1.103.0

  1. API: Cloud storage
  2. OS type and version: OSX
  3. Java version: 1.8.0_161
  4. storage version(s): 1.103.0

Steps to reproduce

  1. Upload a blob that contains a space in its name.
  2. Try to get it.

Code example

Storage gcs = StorageOptions.getDefaultInstance().getService();
gcs.get("my_bucket", "my blob")
// returns null

Any additional information below

This works in version 1.102.0 but returns null in 1.103.0.
I've tried replacing the space with %20 or +, but that didn't work either.

Cloud Storage: Reduced content of gzipped objects in google storage

Environment details

  1. API: Cloud Storage
  2. OS type and version: macOS Catalina. Version: 10.15.1
  3. Java version: 1.8.0_161
  4. google-cloud-java version(s): 1.102.0

Description

There are several compressed XML files in a Google Storage bucket with the following encoding and content type:

  Content-Encoding:       gzip
  Content-Type:           text/xml

We use the Storage::list method to list all blobs in the bucket and Blob::getContent to get each object's content.
But sometimes truncated (reduced) blob content is returned.

Steps to reproduce

  1. Try to get each blob's content from the bucket 10-15 times.
  2. Observe that in one of the runs the size of a blob can be smaller than in the other runs.

Code example

    public static void main(String[] args) {
        // 'storage' and 'bucket' are assumed to be initialized elsewhere, e.g.
        // Storage storage = StorageOptions.getDefaultInstance().getService();
        Map<String, Integer> expectedFileSizes = new HashMap<>();
        expectedFileSizes.put("2019/10/10/11/0eaba27bcb.xml.gz", 1137018);
        expectedFileSizes.put("2019/10/10/11/f74aa52b33.xml.gz", 8067303);
        expectedFileSizes.put("2019/10/10/11/e7342793de.xml.gz", 5123050);

        int runs = 15, run = 0;
        while (run++ < runs) {
            System.out.println("Run: " + run);
            Iterable<Blob> blobs = storage.list(bucket, BlobListOption.prefix("2019/10/10/11")).iterateAll();
            for (Blob blob : blobs) {
                String name = blob.getName();
                byte[] content = blob.getContent();

                int expectedSize = expectedFileSizes.get(name);
                int actualSize = content.length;

                System.out.println("\tname: " + name + ", size: " + actualSize);
                assertEquals(expectedSize, actualSize, "Run: " + run + ": incorrect size! " + name);
            }
        }
    }

Stack trace

Run: 1
	name: 2019/10/10/11/0eaba27bcb.xml.gz, size: 1137018
	name: 2019/10/10/11/e7342793de.xml.gz, size: 5123050
	name: 2019/10/10/11/f74aa52b33.xml.gz, size: 8067303
Run: 2
	name: 2019/10/10/11/0eaba27bcb.xml.gz, size: 1137018
	name: 2019/10/10/11/e7342793de.xml.gz, size: 5123050
	name: 2019/10/10/11/f74aa52b33.xml.gz, size: 8067303
Run: 3
	name: 2019/10/10/11/0eaba27bcb.xml.gz, size: 1137018
	name: 2019/10/10/11/e7342793de.xml.gz, size: 5123050
	name: 2019/10/10/11/f74aa52b33.xml.gz, size: 7583612
Exception in thread "main" org.opentest4j.AssertionFailedError: Run: 3: incorrect size! 2019/10/10/11/f74aa52b33.xml.gz ==> expected: <8067303> but was: <7583612>
	at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
	at org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62)
	at org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
	at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:542)
	at Main.main(Main.java:42)

Process finished with exit code 1
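
For comparison, an alternative read path that streams the object through a ReadChannel (a sketch only, not a confirmed fix for the truncation; the bucket and object names are supplied by the caller) might look like this:

import com.google.cloud.ReadChannel;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class ChannelRead {
  // Streams an object's bytes through a ReadChannel instead of Blob.getContent().
  static byte[] readAll(Storage storage, String bucket, String object) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);
    try (ReadChannel reader = storage.reader(BlobId.of(bucket, object))) {
      while (reader.read(buffer) > 0) {
        buffer.flip();
        out.write(buffer.array(), 0, buffer.limit());
        buffer.clear();
      }
    }
    return out.toByteArray();
  }
}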

Http 400: invalid User Project should include the invalid value

I'm not sure where to report this since the http response isn't being generated in this library, but I thought you could point me in the right place.

I got an exception that didn't have as much information as I would like.

      {
          "code" : 400,
          "errors" : [ {
            "domain" : "global",
            "message" : "User project specified in the request is invalid.",
            "reason" : "invalid"
          } ],
          "message" : "User project specified in the request is invalid."
        }

It would be really helpful if the exception message included the invalid project that was specified. In my case it was a dumb typo that caused the problem. It took longer to diagnose because it wasn't included in the error message.

It doesn't seem like it would be easy to catch and improve the error message at the calling site, because this can happen in many places, so the code that generates the HTTP error response should include it. I don't know where to find that code, though, to patch it or open an issue against it.

Full stack trace:

com.google.cloud.storage.StorageException: User project specified in the request is invalid.
        at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:229)
        at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:370)
        at com.google.cloud.storage.StorageImpl$8.call(StorageImpl.java:373)
        at com.google.cloud.storage.StorageImpl$8.call(StorageImpl.java:370)
        at shaded.cloud_nio.com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
        at shaded.cloud_nio.com.google.cloud.RetryHelper.run(RetryHelper.java:76)
        at shaded.cloud_nio.com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
        at com.google.cloud.storage.StorageImpl.listBlobs(StorageImpl.java:369)
        at com.google.cloud.storage.StorageImpl.list(StorageImpl.java:325)
        at com.google.cloud.storage.contrib.nio.CloudStoragePath.seemsLikeADirectoryAndUsePseudoDirectories(CloudStoragePath.java:107)
        at com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider.readAttributes(CloudStorageFileSystemProvider.java:810)
        at java.nio.file.Files.readAttributes(Files.java:1737)
        at java.nio.file.Files.isRegularFile(Files.java:2229)
        at htsjdk.samtools.SamFiles.lookForIndex(SamFiles.java:73)
        at htsjdk.samtools.SamFiles.findIndex(SamFiles.java:39)
        at htsjdk.samtools.SamReaderFactory.open(SamReaderFactory.java:103)
        at org.broadinstitute.hellbender.engine.ReadsDataSource.<init>(ReadsDataSource.java:227)
        at org.broadinstitute.hellbender.engine.ReadsDataSource.<init>(ReadsDataSource.java:162)
        at org.broadinstitute.hellbender.engine.ReadsDataSource.<init>(ReadsDataSource.java:118)
        at org.broadinstitute.hellbender.engine.ReadsDataSource.<init>(ReadsDataSource.java:87)
        at org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSource.getHeader(ReadsSparkSource.java:174)
        at org.broadinstitute.hellbender.engine.spark.GATKSparkTool.initializeReads(GATKSparkTool.java:562)
        at org.broadinstitute.hellbender.engine.spark.GATKSparkTool.initializeToolInputs(GATKSparkTool.java:541)
        at org.broadinstitute.hellbender.engine.spark.GATKSparkTool.runPipeline(GATKSparkTool.java:531)
        at org.broadinstitute.hellbender.engine.spark.SparkCommandLineProgram.doWork(SparkCommandLineProgram.java:31)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:139)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:191)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:210)
        at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:162)
        at org.broadinstitute.hellbender.Main.instanceMain(Main.java:148)
        at org.broadinstitute.hellbender.Main.instanceMain(Main.java:189)
        at org.broadinstitute.hellbender.CommandLineProgramTest.runCommandLine(CommandLineProgramTest.java:27)
        at org.broadinstitute.hellbender.testutils.CommandLineProgramTester.runCommandLine(CommandLineProgramTester.java:101)
        at org.broadinstitute.hellbender.tools.spark.pipelines.PrintReadsSparkIntegrationTest.testGCSInputsAndOutputsWithSparkNio(PrintReadsSparkIntegrationTest.java:59)
        Caused by:
        shaded.cloud_nio.com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
        {
          "code" : 400,
          "errors" : [ {
            "domain" : "global",
            "message" : "User project specified in the request is invalid.",
            "reason" : "invalid"
          } ],
          "message" : "User project specified in the request is invalid."
        }
            at shaded.cloud_nio.com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
            at shaded.cloud_nio.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
            at shaded.cloud_nio.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
            at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:451)
            at shaded.cloud_nio.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1089)
            at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:549)
            at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:482)
            at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:599)
            at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:360)
            ... 32 more

Deprecated MD5 hash function

[WARNING] /home/elharo/java-storage/google-cloud-storage/src/main/java/com/google/cloud/storage/StorageImpl.java:[149,57] md5() in com.google.common.hash.Hashing has been deprecated
[WARNING] /home/elharo/java-storage/google-cloud-storage/src/main/java/com/google/cloud/storage/StorageImpl.java:[165,57] md5() in com.google.common.hash.Hashing has been deprecated

Reactive support for RxJava/Spring Reactor

Is there any way to use this library with Project Reactor / RxJava, or any other implementation we can use to make it reactive?

For Firestore, this GitHub doc provides details about reactive Firestore.

Describe the solution you'd like
Use google-cloud-storage with reactive frameworks like spring-webflux and RxJava

Describe alternatives you've considered
There is a suggestion in the reference docs to wrap calls with an elastic scheduler, as sketched below.
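
A minimal sketch of that wrapping with Project Reactor (the bucket and object names are placeholders) could look like:

import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class ReactiveStorage {
  private final Storage storage = StorageOptions.getDefaultInstance().getService();

  // Runs the blocking client call on Reactor's bounded elastic scheduler.
  Mono<byte[]> download(String bucket, String object) {
    return Mono.fromCallable(() -> storage.readAllBytes(bucket, object))
        .subscribeOn(Schedulers.boundedElastic());
  }
}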

Is there any recommended way to handle this?

Synthesis failed for java-storage

Hello! Autosynth couldn't regenerate java-storage. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
On branch autosynth
nothing to commit, working tree clean
HEAD detached at FETCH_HEAD
nothing to commit, working tree clean
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
    main()
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
    spec.loader.exec_module(synth_module)  # type: ignore
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 21, in <module>
    templates = common_templates.java_library()
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 75, in java_library
    return self._generic_library("java_library", **kwargs)
  File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 43, in _generic_library
    if not kwargs["metadata"]["samples"]:
KeyError: 'samples'

Synthesis failed

Google internal developers can see the full log here.

Google Cloud Storage | CRC32c of large files.

Hello,

I'm developing a service to upload files between 1GB and 2GB to Google Cloud Storage.

Is there a way to calculate the CRC32C hash while I'm writing the stream to the WriteChannel?

The problem is that I always need to calculate the hash of the whole file before creating the BlobInfo with the calculated hash.

Here is my current code (attached to the original issue as a link titled "Hash").
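
One way to compute the checksum incrementally, sketched here assuming Java 9+ (for java.util.zip.CRC32C) and leaving aside the base64 encoding that Cloud Storage expects for the crc32c field, is to update a checksum over each chunk just before writing it:

import com.google.cloud.WriteChannel;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.CRC32C;

public class StreamingCrc32c {
  // Streams 'in' to the given blob while accumulating a CRC32C over the same bytes.
  static long uploadWithCrc(Storage storage, BlobInfo blobInfo, InputStream in) throws IOException {
    CRC32C crc = new CRC32C();
    byte[] buf = new byte[1024 * 1024];
    try (WriteChannel writer = storage.writer(blobInfo)) {
      int n;
      while ((n = in.read(buf)) != -1) {
        crc.update(buf, 0, n);                    // checksum the chunk
        writer.write(ByteBuffer.wrap(buf, 0, n)); // then upload it
      }
    }
    return crc.getValue(); // compare against the object's CRC32C after the upload
  }
}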

Thanks everyone!

google-cloud-storage: Path segment of signed URLs not correct for objects starting with a forward slash.

google-cloud-storage v0.120

The storage library doesn't create signed URLs correctly for objects starting with a forward slash. (I kept this behavior in a recent refactor I did, but recently found that I should have fixed it instead.) See this block of code:
https://github.com/googleapis/google-cloud-java/blob/afe98d2fb3535d5236b4d6377a5026a1977a9ce3/google-cloud-clients/google-cloud-storage/src/main/java/com/google/cloud/storage/StorageImpl.java#L729

where the logic also existed before it was shuffled around in a recent refactor:
https://github.com/googleapis/google-cloud-java/blob/6de998cde1ab542d95af740e310ae58cb7c317a0/google-cloud-clients/google-cloud-storage/src/main/java/com/google/cloud/storage/StorageImpl.java#L666

All slashes should be preserved; the library currently checks for one and removes it, although I can't determine why. I saw the original issue (See googleapis/google-cloud-java#1008 and googleapis/google-cloud-java#1006), but the comments there were very vague and didn't seem to outline an example URL for which this was valid behavior. I'm guessing it made sense for the way the library was written at the time, but today, it prevents users from correctly forming a signed URL for objects whose names start with a forward slash.


Current (wrong) behavior:

The object "/foo" in the bucket "bucket" ends up having a signed URL that starts with one of these strings:

https://storage.googleapis.com/bucket/foo
https://bucket.storage.googleapis.com/foo

Correct behavior:

The object "/foo" in the bucket "bucket" should have a signed URL that starts with one of these strings (notice the preserved forward slashes in the resource name):

https://storage.googleapis.com/bucket//foo
https://bucket.storage.googleapis.com//foo

Blob.reload() does not work as intuitively expected

Documentation for Blob.writer(BlobWriterOption...) recommends the following way to upload information:

 byte[] content = "Hello, World!".getBytes(UTF_8);
 try (WriteChannel writer = blob.writer()) {
   try {
     writer.write(ByteBuffer.wrap(content, 0, content.length));
   } catch (Exception ex) {
     // handle exception
   }
 }

But the doc is silent on how to refresh the blob object, which is a bit tricky because neither blob.reload() nor storage.get(blob.getBlobId()) helps. The behavior varies depending on the versioning setting for the bucket.

If versioning is enabled:

Attempting to reload as recommended:
blob.reload(Blob.BlobSourceOption.generationNotMatch()) causes a StorageException '304 Not Modified'.

blob.reload() returns the same blob.
storage.get(blob.getBlobId()) returns the same blob (which seems very confusing).
Only storage.get(blobId), with a BlobId built without the generation, returns the updated blob.

If versioning is suspended:
Before upload: blob.getContent() returns an empty array.
After upload: blob.getContent() throws StorageException: 404 Not Found, No such object.
storage.get(blob.getBlobId()), blob.reload(), and blob.reload(Blob.BlobSourceOption.generationNotMatch()) all return null.

And again, only storage.get(blobId) returns the updated blob.
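
In other words, assuming blobId above means a BlobId built without the generation, the refresh that works can be sketched as:

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;

public class RefreshBlob {
  // Re-reads the latest metadata by building a BlobId that does not pin the generation.
  static Blob latest(Storage storage, Blob blob) {
    return storage.get(BlobId.of(blob.getBucket(), blob.getName()));
  }
}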

Synthesis failed for java-storage

Hello! Autosynth couldn't regenerate java-storage. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to branch 'autosynth'
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
    main()
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
    last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
    text=True,
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
    with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'

Google internal developers can see the full log here.

StorageException: 404 Not Found

We use Google's Java client library to create a storage client, with which we get a particular blob via storageClient.get(blobId).

We are facing an issue: calling the above method intermittently leads to "StorageException: 404 Not Found", even when the object exists and can be seen in the console.

Below is the stack trace:

   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT **Caused by: com.google.cloud.storage.StorageException: 404 Not Found**
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT No such object: <our object name - we would not want to disclose>
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:220) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.spi.v1.HttpStorageRpc.load(HttpStorageRpc.java:588) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:464) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:461) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89) ~[gax-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.RetryHelper.run(RetryHelper.java:74) ~[google-cloud-core-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51) ~[google-cloud-core-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.StorageImpl.readAllBytes(StorageImpl.java:461) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.Blob.getContent(Blob.java:455) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.sap.hcp.osaas.cfbroker.utils.GCPUtils.getBlob(GCPUtils.java:60) ~[classes/:?]

   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	... 89 more
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT **Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 404 Not Found**
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT No such object: <our object name we would not want to disclose>
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1065) ~[google-http-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeMedia(AbstractGoogleClientRequest.java:380) ~[google-api-client-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.services.storage.Storage$Objects$Get.executeMedia(Storage.java:6189) ~[google-api-services-storage-v1-rev114-1.23.0.jar!/:?]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.spi.v1.HttpStorageRpc.load(HttpStorageRpc.java:584) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:464) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:461) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89) ~[gax-1.23.0.jar!/:1.23.0]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.RetryHelper.run(RetryHelper.java:74) ~[google-cloud-core-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51) ~[google-cloud-core-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at com.google.cloud.storage.StorageImpl.readAllBytes(StorageImpl.java:461) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]
   2018-06-25T15:25:38.58+0530 [APP/PROC/WEB/1] OUT 	at **com.google.cloud.storage.Blob.getContent**(Blob.java:455) ~[google-cloud-storage-1.24.1.jar!/:1.24.1]

Is there anything we might be missing, or is it an issue on the library side?

Thanks,
Swati Jain

Passing RSA wrapped CSEK not possible

Using the google-cloud-storage API, there is no way to indicate that the passed AES-256 key is RSA-wrapped.

Storage.BlobWriteOption blobWriteOption = Storage.BlobWriteOption.encryptionKey(str);
Storage.BlobTargetOption blobTargetOption = Storage.BlobTargetOption.encryptionKey(str);

Using an RSA-wrapped key causes a "Bad Request":

{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "customerEncryptionKeyFormatIsInvalid",
        "message": "Missing an encryption key, or it is not base64 encoded, or it does not meet the required length of the encryption algorithm.",
        "extendedHelp": "https://cloud.google.com/storage/docs/encryption#customer-supplied_encryption_keys"
      }
    ],
    "code": 400,
    "message": "Missing an encryption key, or it is not base64 encoded, or it does not meet the required length of the encryption algorithm."
  }
}

Thanks!
Boris

Dependency convergence issues that surface when using maven-enforcer-plugin with DependencyConvergence

The java-storage/google-cloud-storage module has a dependency tree that mixes versions of the same artifact (via transitive dependencies). This is causing me and my team grief, because we use the maven-enforcer-plugin with <DependencyConvergence/> configured (in order to keep our internal and transitive dependencies consistent).

In my opinion the best long term solution to this issue would be:

  1. Enable <DependencyConvergence/> in the parent pom: java-shared-config/pom.xml#L225
  2. Explicitly choose a version for the conflicting dependencies:
    a. by adding the dependency in the dependency management section, or
    b. by doing the appropriate BOM import in the dependency management section.

Steps to reproduce

  1. Clone this project
  2. Add the following plugin and config in the root pom.xml of the project:
...
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-enforcer-plugin</artifactId>
        <executions>
          <execution>
            <id>enforce</id>
            <goals>
              <goal>enforce</goal>
            </goals>
            <configuration>
              <rules>
                <DependencyConvergence/>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>
      ...
    </plugins>
   ...
  </build>
  3. Run mvn install

The convergence issue report in full:

These are the conflicting versions in this module:

[WARNING]
Dependency convergence error for org.hamcrest:hamcrest-core:1.3 paths to dependency are:
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-junit:junit:4.13
    +-org.hamcrest:hamcrest-core:1.3
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-org.hamcrest:hamcrest-core:2.2

[WARNING]
Dependency convergence error for com.google.errorprone:error_prone_annotations:2.3.4 paths to dependency are:
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.guava:guava:28.2-android
    +-com.google.errorprone:error_prone_annotations:2.3.4
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.protobuf:protobuf-java-util:3.11.3
    +-com.google.errorprone:error_prone_annotations:2.3.4
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-io.grpc:grpc-api:1.27.0
    +-com.google.errorprone:error_prone_annotations:2.3.4
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-io.grpc:grpc-netty-shaded:1.27.0
    +-io.grpc:grpc-core:1.27.0
      +-com.google.errorprone:error_prone_annotations:2.3.4
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.truth:truth:1.0.1
    +-com.google.errorprone:error_prone_annotations:2.3.3

[WARNING]
Dependency convergence error for com.google.auto.value:auto-value-annotations:1.7 paths to dependency are:
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.auth:google-auth-library-oauth2-http:0.20.0
    +-com.google.auto.value:auto-value-annotations:1.7
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.truth:truth:1.0.1
    +-com.google.auto.value:auto-value-annotations:1.6.3

[WARNING]
Dependency convergence error for com.google.api-client:google-api-client:1.30.8 paths to dependency are:
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.api-client:google-api-client:1.30.8
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.apis:google-api-services-storage:v1-rev20191011-1.30.3
    +-com.google.api-client:google-api-client:1.30.3
and
+-com.google.cloud:google-cloud-storage:1.103.2-SNAPSHOT
  +-com.google.cloud:google-cloud-core-http:1.91.3
    +-com.google.api-client:google-api-client:1.30.4

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence failed with message:
Failed while enforcing releasability. See above detailed error message.

Thanks!

Unable to add lifecycle rules if the location is not specified during bucket creation

Hi,
When I try to add lifecycle rules to a bucket without specifying the "location" attribute for the bucket, bucket creation fails with the error "invalid location constraint", whereas bucket creation works without the location field if lifecycle rules are not added as part of the request.

Using the JSON API with the help of Postman, I got the same error.

Request:

POST /storage/v1/b?project={project} HTTP/1.1
Host: www.googleapis.com
Content-Type: application/json
Authorization: Bearer {token}

{
  "name": "ia-gcs-lifecyle-test",
  "lifecycle": {
    "rule": [
      {
        "action": {
          "type": "SetStorageClass",
          "storageClass": "COLDLINE"
        },
        "condition": {
          "age": 29
        }
      }
    ]
  }
}

Response:

{
    "error": {
        "errors": [
            {
                "domain": "global",
                "reason": "invalid",
                "message": "Invalid location constraint \"\""
            }
        ],
        "code": 400,
        "message": "Invalid location constraint \"\""
    }
}

Environment details
OS type and version: Windows 10
Java version: JDK 11
sdk version: google-cloud-storage-1.7.8.0.jar
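
A possible workaround, sketched below with an explicitly set location (the "US" value is an assumption, not part of the original report), is to specify the location alongside the lifecycle rules:

import com.google.cloud.storage.BucketInfo;
import com.google.cloud.storage.BucketInfo.LifecycleRule;
import com.google.cloud.storage.BucketInfo.LifecycleRule.LifecycleAction;
import com.google.cloud.storage.BucketInfo.LifecycleRule.LifecycleCondition;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageClass;
import com.google.cloud.storage.StorageOptions;
import java.util.Collections;

public class CreateBucketWithLifecycle {
  public static void main(String[] args) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    LifecycleRule rule = new LifecycleRule(
        LifecycleAction.newSetStorageClassAction(StorageClass.COLDLINE),
        LifecycleCondition.newBuilder().setAge(29).build());
    BucketInfo bucketInfo = BucketInfo.newBuilder("ia-gcs-lifecyle-test")
        .setLocation("US") // explicit location avoids the empty location constraint
        .setLifecycleRules(Collections.singletonList(rule))
        .build();
    storage.create(bucketInfo);
  }
}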

Cloud Storage: provide option to disableGzipContent on InputStream variants

Is your feature request related to a problem? Please describe.

I have a use case that necessitates using the Storage/Bucket write method variants with InputStream arguments. The inputs for my use case to store in Cloud Storage are:

  • predominantly already compressed,
  • have a wide range of file sizes, from a few KB to GB+,
  • we don't always know the size in advance.

Describe the solution you'd like

Given that our content is already compressed, I would prefer to avoid spending the CPU time on compressing the content again en route to the Bucket.

Describe alternatives you've considered

We have used the byte[] variants, with BlobTargetOption.disableGzipContent and the Compose request. This is suitable but leaves us with a tuning challenge:

  • Buffering these streams into memory (byte[] chunks) requires additional heap space be available
  • Compose is limited to only 32 chunks. If the file content we intend to store is far larger than 32 * bufferSize, we will upload 31 small chunks and 1 large chunk that we have to use the InputStream variant for anyway, and pay the additional overhead of gzip compression.

Additional context

I have done some exploration, and it appears that the values on both the BlobWriteOption (used on InputStream variants) and BlobTargetOption (used on byte[] variants) enums are translated into StorageRpc.Option. I have a small contribution to offer in the form of a pull request to follow.

Randomly Receiving com.google.cloud.storage.StorageException: Connection reset

We randomly receive com.google.cloud.storage.StorageException: Connection reset in some requests when uploading and downloading files from Google Cloud Storage.
Mainly this exception occurs:

  1. while fetching a bucket instance
  2. while creating a new ACL entry on the specified blob.
  • OS: Windows
  • Java version: 8
  • google-cloud-java version(s): 1.2.1,

Stacktrace

Caused by: com.google.cloud.storage.StorageException: Connection reset
at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:189)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.get(HttpStorageRpc.java:309)
at com.google.cloud.storage.StorageImpl$4.call(StorageImpl.java:169)
at com.google.cloud.storage.StorageImpl$4.call(StorageImpl.java:166)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89)
at com.google.cloud.RetryHelper.run(RetryHelper.java:74)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51)
at com.google.cloud.storage.StorageImpl.get(StorageImpl.java:165)
... 19 common frames omitted
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at sun.security.ssl.InputRecord.readFully(Unknown Source)
at sun.security.ssl.InputRecord.read(Unknown Source)
at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
at sun.net.www.protocol.https.HttpsClient.afterConnect(Unknown Source)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(Unknown Source)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.get(HttpStorageRpc.java:307)
... 26 common frames omitted

1. code snippet

  Storage storage = StorageOptions.getDefaultInstance()
      .getService();
  // get bucket instance from storage
  Bucket bucket = storage.get(bucketName);

2. code snippet

// get storage instance
Storage storage = StorageOptions.getDefaultInstance().getService();
// get bucket instance from storage
Bucket bucket = storage.get(bucketName);
BlobId blobId = BlobId.of(bucket.getName(), subDirectory + "/" + name);
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();
try (WriteChannel writer = storage.writer(blobInfo)) {
  writer.write(ByteBuffer.wrap(content, 0, content.length));
}
if (markPublic) {
  // mark the uploaded file as public
  Acl acl = storage.createAcl(blobId, Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER));
  return "https://storage.googleapis.com/" + blobInfo.getBucket() + "/" + blobId.getName();
}

Additional Information

I have also whitelisted *.googleapis.com and *.google.com domains. But some of the calls from the SDK go directly to an IP address, like https://216.58.220.10, and because of that my network is restricting those calls.

Is there a way I can configure the Storage SDK so that it only calls APIs under a Google domain? I have restrictions on my firewall so that data packets from random IP addresses are not transferred.
The attached image shows that some requests go to google.com, while some requests go directly to a (Google-owned) IP address. Requests that go directly to an IP address throw this exception.

Thanks!

Cloud Storage: Removing lifecyclerules from bucket is not working

I was successfully able to add the lifecycle rules to a bucket.

Storage storage=/* connection*/;
Builder builder = BucketInfo.newBuilder(name);
builder.setStorageClass(COLDLINE);
builder.setLifecycleRules( lifecycleRules);
storage.create(builder.build());

But it seems there is no way to remove the lifecycle rules from a bucket:

Storage storage=/* connection*/;
Builder builder = BucketInfo.newBuilder(name);
builder.setLifecycleRules(null);
storage.update(builder.build());

It looks like there is no support in the SDK for a PATCH update, whereas the REST API for lifecycle configuration is available.

https://cloud.google.com/storage/docs/managing-lifecycles

curl -X PATCH --data-binary @[LIFECYCLE_CONFIG_FILE].json \
  -H "Authorization: Bearer [OAUTH2_TOKEN]" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/[BUCKET_NAME]?fields=lifecycle"

Environment details
OS type and version: Windows 10
Java version: JDK 11
sdk version: google-cloud-storage-1.7.8.0.jar

Cloud Storage: Signed URL determine generation from BlobInfo

For V2: the URL returned by the signUrl() method doesn't include the parameter &generation=<number>; one has to add it manually.
For V4: there is no way to obtain a URL pointing to anything other than the latest version of an object; adding &generation=<number> doesn't help.

As part of the java-storage project, only the V2 problem can be fixed.

The V4 problem also exists in nodejs-storage: googleapis/google-cloud-java#953.
A very similar issue is googleapis/google-cloud-java#7044.
It should be fixed on the server side.

Steps to reproduce:

Create a blob with two generations:

#> gsutil versioning set on gs://bucket_generation_signed
Enabling versioning for gs://bucket_generation_signed/...
#> echo The very first version > some.txt
#> gsutil cp some.txt gs://bucket_generation_signed
Copying file://some.txt [Content-Type=text/plain]...
/ [1 files][   23.0 B/   23.0 B]                                                
Operation completed over 1 objects/23.0 B.                                       
#> echo Updated version of the file > some.txt
#> gsutil cp some.txt gs://bucket_generation_signed
Copying file://some.txt [Content-Type=text/plain]...
/ [1 files][   28.0 B/   28.0 B]                                                
Operation completed over 1 objects/28.0 B.                                       
#> gsutil ls -a gs://bucket_generation_signed/some.txt
gs://bucket_generation_signed/some.txt#1576656755290328
gs://bucket_generation_signed/some.txt#1576656816500788

Run the following code:

    public static void main(String... args) throws Exception {
        List<String> al = Arrays.asList("https://www.googleapis.com/auth/cloud-platform");
        String googleCredentialsJson = new String(Files.readAllBytes(Paths.get("bucket_generation_signed.json")));
        GoogleCredentials credentials = GoogleCredentials
                .fromStream(new ByteArrayInputStream(googleCredentialsJson.getBytes())).createScoped(al);
        Storage storage = StorageOptions.newBuilder().setCredentials(credentials).build().getService();

        String bucketName = "bucket_generation_signed";
        String blobName = "some.txt";
        long generation = 1576656755290328L;

        BlobId blobId = BlobId.of(bucketName, blobName, generation);
        Blob blob = storage.get(blobId, Storage.BlobGetOption.generationMatch());
        System.out.println("blob: " + blob);
        System.out.println("v2 " + blob.signUrl(20, TimeUnit.MINUTES, Storage.SignUrlOption.withV2Signature()));
        System.out.println("v4 " + blob.signUrl(20, TimeUnit.MINUTES, Storage.SignUrlOption.withV4Signature()));
    }

The output will look like:

blob: Blob{bucket=bucket_generation_signed, name=some.txt, generation=1576656755290328, size=23, content-type=text/plain, metadata=null}

v2 https://storage.googleapis.com/bucket_generation_signed/some.txt?GoogleAccessId=my-service-account...uJWWFsyd9cfa4FrOSmw%3D%3D

v4 https://storage.googleapis.com/bucket_generation_signed/some.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&...076f845e952a7

Both returned URLs will point to the latest version:

Updated version of the file

This is despite the blob generation being explicitly specified: generation=1576656755290328.
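
For the V2 case, the manual step described above amounts to appending the generation to the returned URL; a sketch:

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class V2GenerationUrl {
  // Appends the generation query parameter by hand, since the V2 signer does not include it.
  static String signV2ForGeneration(Blob blob, int minutes) {
    URL url = blob.signUrl(minutes, TimeUnit.MINUTES, Storage.SignUrlOption.withV2Signature());
    return url + "&generation=" + blob.getGeneration();
  }
}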

Storage: Dependency issue: Could not find androidx.annotation:annotation:1.1.0

Environment details

  1. API: Storage
  2. OS: macOS 10.15.2
  3. Java version: 1.8.0_152
  4. storage version(s): 1.104.0

Steps to reproduce

Create a gradle-based project with these (gradle 6.2):

repositories {
    mavenCentral()
    jcenter()
}

dependencies {
    implementation("com.google.cloud:google-cloud-storage:1.104.0")
}

Build it.

The build fails with

Could not find androidx.annotation:annotation:1.1.0.
     Required by:
         project :backend > com.google.cloud:google-cloud-storage:1.104.0 > com.google.api-client:google-api-client:1.30.8

I'm not sure if this dependency should exist in the first place. But if it should exist, then it should also be published to the repositories where google-cloud-storage is published. Or, at the very least, the README should indicate if/when the dependency can be excluded and/or which repository to add to be able to use google-cloud-storage.

Add delimiter BlobListOption

Currently, it's possible to use '/' as a delimiter by using BlobListOption.currentDirectory()

That's a confusing name; let's add a delimiter option and direct users to use delimiter('/') instead (see the sketch below).
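
For illustration, today's listing with currentDirectory() and the proposed delimiter option (hypothetical until it is added; the bucket name and prefix are placeholders) might look like:

import com.google.api.gax.paging.Page;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.Storage.BlobListOption;
import com.google.cloud.storage.StorageOptions;

public class DelimiterListing {
  public static void main(String[] args) {
    Storage storage = StorageOptions.getDefaultInstance().getService();

    // Today: '/' is implied by the confusingly named currentDirectory() option.
    Page<Blob> current = storage.list(
        "my-bucket", BlobListOption.prefix("logs/"), BlobListOption.currentDirectory());

    // Proposed (hypothetical): an explicit delimiter option.
    // Page<Blob> proposed = storage.list(
    //     "my-bucket", BlobListOption.prefix("logs/"), BlobListOption.delimiter("/"));
  }
}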

Blob.downloadTo() methods do not wrap RetryHelper$RetryHelperException

Quoting the source code for one of the downloadTo methods:

  /**
   * Downloads this blob to the given file path using specified blob read options.
   *
   * @param path destination
   * @param options blob read options
   * @throws StorageException upon failure
   */
  public void downloadTo(Path path, BlobSourceOption... options) {

However, StorageException is never thrown; a confusing RetryHelper$RetryHelperException is thrown instead:

        String bucketName = "my-bucket";
        String blobName = "my.txt";
        BlobInfo blobInfo = BlobInfo.newBuilder(bucketName, blobName).build();
        String key = "JVzfVl8NLD9FjedFuStegjRfES5ll5zc59CIXw572OA=";

        Blob blob1 = storage.create(blobInfo, Storage.BlobTargetOption.encryptionKey(key));
        blob1.writer().write(ByteBuffer.wrap("Hello".getBytes("UTF8")));

        Blob blob2 = storage.get(blobInfo.getBlobId());
        Path tempFileTo = Files.createTempFile("my_", ".tmp");
        try {
            blob2.downloadTo(tempFileTo);
        } catch (StorageException e) {
            // nice handling
        }

Output:

Exception in thread "main" com.google.cloud.RetryHelper$RetryHelperException: com.google.cloud.storage.StorageException: 400 Bad Request
The target object is encrypted by a customer-supplied encryption key.
    at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:54)
    at com.google.cloud.storage.Blob.downloadTo(Blob.java:236)
    at com.google.cloud.storage.Blob.downloadTo(Blob.java:220)
    at com.google.cloud.storage.Blob.downloadTo(Blob.java:263)
   ...
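
Until that is fixed, a defensive pattern callers can use, sketched here on the assumption that unwrapping the cause is acceptable, is:

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.StorageException;
import java.nio.file.Path;

public class DownloadWithUnwrap {
  // Downloads the blob, surfacing the underlying StorageException if a retry wrapper hid it.
  static void download(Blob blob, Path target) {
    try {
      blob.downloadTo(target);
    } catch (RuntimeException e) {
      if (e.getCause() instanceof StorageException) {
        throw (StorageException) e.getCause(); // handle or rethrow the documented exception
      }
      throw e;
    }
  }
}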
