
fog-google's Introduction

fog

fog is the Ruby cloud services library, top to bottom:

  • Collections provide a simplified interface, making clouds easier to work with and switch between.
  • Requests allow power users to get the most out of the features of each individual cloud.
  • Mocks make testing and integrating a breeze.


Dependency Notice

All fog providers are currently being split out into separate metagems to reduce load time and dependency count.

If there's a metagem available for your cloud provider, e.g. fog-aws, you should be using it instead of requiring the full fog collection to avoid unnecessary dependencies.

'fog' should be required explicitly only if the provider you use doesn't yet have a metagem available.

Getting Started

The easiest way to learn fog is to install the gem and use the interactive console. Here is an example of wading through server creation for Amazon Elastic Compute Cloud:

$ sudo gem install fog
[...]

$ fog

  Welcome to fog interactive!
  :default provides [...]

>> server = Compute[:aws].servers.create
ArgumentError: image_id is required for this operation

>> server = Compute[:aws].servers.create(:image_id => 'ami-5ee70037')
<Fog::AWS::EC2::Server [...]>

>> server.destroy # cleanup after yourself or regret it, trust me
true

Ruby version

Fog requires Ruby 2.0.0 or later.

Ruby 1.8 and 1.9 support was dropped in fog 2.0.0 as a backwards-incompatible change. Please use the latest fog 1.x release if you require Ruby 1.8.7 or 1.9.x support.

Collections

A high level interface to each cloud is provided through collections, such as images and servers. You can see a list of available collections by calling collections on the connection object. You can try it out using the fog command:

>> Compute[:aws].collections
[:addresses, :directories, ..., :volumes, :zones]

Some collections are available across multiple providers:

  • compute providers have flavors, images and servers
  • dns providers have zones and records
  • storage providers have directories and files

Collections share basic CRUD type operations, such as:

  • all - fetch every object of that type from the provider.
  • create - initialize a new record locally and a remote resource with the provider.
  • get - fetch a single object by its identity from the provider.
  • new - initialize a new record locally, but do not create a remote resource with the provider.

As an example, we'll try initializing and persisting a Rackspace Cloud server:

require 'fog'

compute = Fog::Compute.new(
  :provider           => 'Rackspace',
  :rackspace_api_key  => key,
  :rackspace_username => username
)

# boot a gentoo server (flavor 1 = 256, image 3 = gentoo 2008.0)
server = compute.servers.create(:flavor_id => 1, :image_id => 3, :name => 'my_server')
server.wait_for { ready? } # give server time to boot

# DO STUFF

server.destroy # cleanup after yourself or regret it, trust me

Models

Many of the collection methods return individual objects, which also provide common methods:

  • destroy - will destroy the persisted object from the provider
  • save - persist the object to the provider
  • wait_for - takes a block and waits for either the block to return true for the object or for a timeout (defaults to 10 minutes)
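For example, a typical lifecycle using these methods looks like this (a minimal sketch reusing the compute connection and the placeholder flavor/image ids from the Rackspace example above):

# build the record locally without touching the provider
server = compute.servers.new(:flavor_id => 1, :image_id => 3, :name => 'my_server')

server.save                  # persist the server to the provider
server.wait_for { ready? }   # block until ready, or until the default 10 minute timeout
server.destroy               # remove the remote resource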

Mocks

As you might imagine, testing code that uses fog can be slow and expensive, constantly turning on and shutting down instances. Mocking skips this overhead by keeping an in-memory representation of resources as you make requests. Enabling mocking is easy: before you run other commands, simply run:

Fog.mock!

Then proceed as usual. If you run into an unimplemented mock, fog will raise an error; as always, contributions are welcome!
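For example, a minimal sketch against the AWS provider (the credentials are throwaway strings; with mocking enabled no real resources are created):

require 'fog'

Fog.mock!   # all subsequent requests are served from the in-memory mocks

compute = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'fake',
  :aws_secret_access_key => 'fake'
)

server = compute.servers.create(:image_id => 'ami-5ee70037')
server.wait_for { ready? }   # returns quickly against the mocks
server.destroy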

Requests

Requests allow you to dive deeper when the models just can't cut it. You can see a list of available requests by calling #requests on the connection object.

For instance, ec2 provides methods related to reserved instances that don't have any models (yet). Here is how you can look up your reserved instances:

$ fog
>> Compute[:aws].describe_reserved_instances
#<Excon::Response [...]>

The request returns an Excon::Response, which has body, headers and status attributes; body and headers are parsed into plain hashes.
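Concretely (the body keys shown are illustrative; they depend on the request):

response = Compute[:aws].describe_reserved_instances
response.status    # => 200
response.headers   # => { "Content-Type" => "text/xml;charset=UTF-8", ... }
response.body      # => { "reservedInstancesSet" => [...], "requestId" => "..." }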

Go forth and conquer

Play around and use the console to explore, or check out fog.io and the provider documentation for more details and examples. Once you are ready to start scripting with fog, here is a quick hint on how to make connections without the interactive console to help you.

# create a compute connection
compute = Fog::Compute.new(:provider => 'AWS', :aws_access_key_id => ACCESS_KEY_ID, :aws_secret_access_key => SECRET_ACCESS_KEY)
# compute operations go here

# create a storage connection
storage = Fog::Storage.new(:provider => 'AWS', :aws_access_key_id => ACCESS_KEY_ID, :aws_secret_access_key => SECRET_ACCESS_KEY)
# storage operations go here

geemus says: "That should give you everything you need to get started, but let me know if there is anything I can do to help!"

Versioning

The fog library aims to adhere to Semantic Versioning 2.0.0, although SemVer does not address the challenges of multi-provider libraries. Semantic versioning is only guaranteed for the common API, not for provider-specific extensions. You may also need to update your configuration from time to time (even between fog releases) as providers update or deprecate services.

However, we still aim for forwards compatibility within Fog major versions. As a result of this policy, you can (and should) specify a dependency on this gem using the Pessimistic Version Constraint with two digits of precision. For example:

spec.add_dependency 'fog', '~> 1.0'

This means your project is compatible with fog from 1.0 up to, but not including, 2.0. You can also set a higher minimum version:

spec.add_dependency 'fog', '~> 1.16'

Getting Help

Contributing

Please refer to CONTRIBUTING.md.

License

Please refer to LICENSE.md.


fog-google's Issues

ACL

Hi, how do I deal with ACLs while uploading?

Ambiguity in asynchronous execution setting

While figuring out acceptance tests for vagrant-google, I found an odd piece of logic in the implementation of synchronous operations in Fog.
As an example, let's take a look at the Server class's destroy method:

def destroy(async=true)
  requires :name, :zone

  data = service.delete_server(name, zone_name)
  operation = Fog::Compute::Google::Operations.new(:service => service).get(data.body['name'], data.body['zone'])
  unless async
    operation.wait_for { ready? }
  end
  operation
end

The async parameter is just a true/false switch, so if we need to perform the operation synchronously (important for tests for example), we need to specify it like so:

instance.destroy(false)

I find this highly confusing for anyone reading the code later.
And since the default is true, assigning the flag to a variable first doesn't make it read any better either:

async_execution = false
instance.destroy(async_execution)

I was wondering: maybe it makes sense to make it a named parameter (as sketched after the list below)?
This will allow for:

  1. Logically sound statements:
instance.start(async: false)
  2. The ability to add more execution flow control parameters easily if we need them.
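For illustration, a hypothetical keyword-argument version of destroy (not the current signature) would look like:

def destroy(async: true)
  requires :name, :zone

  data = service.delete_server(name, zone_name)
  operation = Fog::Compute::Google::Operations.new(:service => service).get(data.body['name'], data.body['zone'])
  # wait for the operation to finish when the caller asked for synchronous behaviour
  operation.wait_for { ready? } unless async
  operation
end

instance.destroy(async: false)   # reads naturally at the call site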

Scope aliases consistency with gcloud

We've had a discussion with @erjohnso about the inconsistency of service account scopes with the official gcloud utility in mitchellh/vagrant-google#71.

If one wants to define a service account scope using a short name, we have the following (left is the gcloud compute alias, right is the fog scope: attribute), since gcloud compute aliases do not match up to API endpoints:

          compute-ro         compute.readonly
          compute-rw         compute
          computeaccounts-ro computeaccounts.readonly
          computeaccounts-rw computeaccounts
          logging-write      logging.write
          sql                sqlservice
          sql-admin          sqlservice.admin
          storage-full       devstorage.full_control
          storage-ro         devstorage.read_only
          storage-rw         devstorage.read_write

Excerpt from gcloud compute instances create --help:

          Alias              URI
          bigquery           https://www.googleapis.com/auth/bigquery
          cloud-platform     https://www.googleapis.com/auth/cloud-platform
          compute-ro         https://www.googleapis.com/auth/compute.readonly
          compute-rw         https://www.googleapis.com/auth/compute
          computeaccounts-ro https://www.googleapis.com/auth/computeaccounts.readonly
          computeaccounts-rw https://www.googleapis.com/auth/computeaccounts
          datastore          https://www.googleapis.com/auth/datastore
          logging-write      https://www.googleapis.com/auth/logging.write
          monitoring         https://www.googleapis.com/auth/monitoring
          sql                https://www.googleapis.com/auth/sqlservice
          sql-admin          https://www.googleapis.com/auth/sqlservice.admin
          storage-full       https://www.googleapis.com/auth/devstorage.full_control
          storage-ro         https://www.googleapis.com/auth/devstorage.read_only
          storage-rw         https://www.googleapis.com/auth/devstorage.read_write
          taskqueue          https://www.googleapis.com/auth/taskqueue
          userinfo-email     https://www.googleapis.com/auth/userinfo.email

I was thinking about implementing it by either:

  1. Keeping an additional mapping of those shortcuts and expanding any that match (see the sketch below). It's probably a bad idea to have only gcloud-style mappings available in scopes, as that may break things for existing users who expect the old-style shortcut.
    or
  2. Creating a separate scope_aliases: attribute which will accept only aliased parameters in the same format as gcloud, expanding them and passing them on to service_accounts: later.
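A minimal sketch of option 1 (the constant and helper names are hypothetical):

# hypothetical mapping from gcloud-style aliases to the scope names fog already accepts
GCLOUD_SCOPE_ALIASES = {
  "compute-ro"   => "compute.readonly",
  "compute-rw"   => "compute",
  "storage-ro"   => "devstorage.read_only",
  "storage-rw"   => "devstorage.read_write",
  "storage-full" => "devstorage.full_control",
  "sql"          => "sqlservice",
  "sql-admin"    => "sqlservice.admin"
  # ...and so on for the rest of the table above
}.freeze

# expand a gcloud alias if it matches, otherwise pass the old-style name through untouched
def expand_scope_alias(scope)
  GCLOUD_SCOPE_ALIASES.fetch(scope, scope)
end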

Any thoughts on this?

P.S. This has already been done in libcloud:
https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/gce.py#L971
https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/gce.py#L4677

How to perform CI testing with integration tests?

Per #18.

I've written an integration test framework and suite of tests that work with a live integration setup, and we need to figure out how to get those tests to not fail on Travis CI. We need these tests run regularly to make sure that the library actually works against the current API. Options I can think of:

  1. We could upload the cassettes as part of the codebase. This is contrary to fog/fog#2112, and the problems are numerous. The two big ones are:
    • We'll have to commit VCR cassettes into the codebase. That's a huge pain, and creates messy commits.
    • We or someone else might inadvertently commit sensitive information embedded in the cassettes, (e.g. authentication information).
  2. We can link tests to mocks. This is also not a great solution, since it means:
    • We have to keep the mocks up-to-date
    • We might get the CI test passing when it should fail against the live API. This is how we got to where we are now, where we don't know how much of the codebase still works and how much is built against an old API spec.
  3. We can run tests against a live project. This also has numerous issues:
    • If a test fails, it's (very) hard to clean up after, (i.e. it may have created resources that it didn't delete, so we might end up with VMs or other things lying around which will cause problems with future tests and also rack up costs for anyone else testing this stuff).
    • It means putting credentials for a project up on our Travis CI server. I don't know how secure that is.
    • They take a very long time to run, (at least 30 minutes,) and they aren't particularly consistent, (e.g. a bad connection can make a whole suite fail for unclear reasons).
  4. We can skip running integration tests on Travis CI and run them in some other environment where we can store credentials. The brain-dead solution would be to just have me (or someone else) run them locally at every update. This is neither transparent nor sustainable.

@plribeiro3000 Your thoughts would be helpful here. Does the Fog community, as far as you know, have working solutions to this problem?

@erjohnso Do you have suggestions, based on how other projects are doing this?

Meta tests in `fog-google`

@plribeiro3000 Thanks again for doing the extraction.

I'm getting ready to cut a gem for fog-google, but I wanted to make sure all of the tests are passing properly and that we're not missing any components.

When I run the tests inside fog-google versus fog, (using sh("export FOG_MOCK=#{mock} && bundle exec shindont tests/google")), pretty much everything looks the same, except that the fog-google test suite seems to have the following two lines that the fog tests don't, with 48 tests missing:

Fog::Schema::DataValidator (meta) +++++++++++++++++++++++
test_helper (meta) +++++++++++++++++++++++++

Should these tests be here? Seems weird to me that they are. It may very well be an artifact of other weird things that are going on.

multiple directories: one for each engine

Hello, is it possible to have a different fog configuration for each engine mounted on the main application? Typically, I'd like each engine to upload to its own bucket.

Update CONTRIBUTORS.md

We gotta keep the CONTRIBUTORS.md file updated.

@geemus has a tool for that (osrcry), but it does not help much here because we already have a ton of contributors from fog who don't have commits here, and since the tool works on top of git it would just drop all of them. What I'm doing so far is updating it by hand, but I guess a provider of this size can't keep going like this.

Perhaps we might send some patches to @geemus. =)

Implement SCRATCH disk type

Is it possible to create a SCRATCH disk instead of a PERSISTENT one? I've looked through the source and it seems that PERSISTENT is hard-coded into everything.
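For reference, on the API side a scratch (local SSD) disk is just another entry in the instance's disks array, roughly of this shape (field names are the Compute Engine API's, not fog's; the zone is a placeholder):

scratch_disk = {
  "type"             => "SCRATCH",
  "autoDelete"       => true,
  "initializeParams" => { "diskType" => "zones/us-central1-b/diskTypes/local-ssd" }
}

so supporting it in fog is mostly a matter of letting callers override the hard-coded "PERSISTENT" value.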

Shindo should be replaced with minitest

Per fog/fog#1266 and fog/fog#2630, Minitest is the new testing framework to be used by fog projects.

  • #51 is open to cover compute in integration tests.
  • Nothing is unit-tested with minitest; it should be. Opening #49.
  • Furthermore, it's not clear whether or not we can run these integration tests in Travis CI, (see #19). Regardless, unit tests should be run in Travis.
  • #51 only covers compute. dns, monitoring, sql, and storage all remain tested under shindo, and should be moved over to minitest.
  • The Rakefile is kind of a mess, and should eventually be cleaned up.

v0.0.6 source and tag

rubygems.org has a 0.0.6 version published as of yesterday, but this repo doesn't seem to contain a tag or corresponding source code (lib/fog/google/version.rb says 0.0.5).

Implement better examples

Currently all fog-google examples are essentially code snippets.
Take https://github.com/fog/fog-google/blob/master/examples/image_all.rb as an example:

def test
  connection = Fog::Compute.new({ :provider => "Google" })

  # If this doesn't raise an exception, everything is good.
  connection.images.all
end

If someone wants to fix something quickly, they first need to set up a development environment, which potentially deters quick changes to the library and makes getting started harder.

What I propose is:

  1. Briefly describe in the README how to get .fogrc working, as in:
default:
    google_project: "my-awesome-project"
    google_client_email: "[email protected]"
    google_json_key_location: "/tmp/gce-key.json"
  2. Add development dependencies to a separate group in the Gemfile, for example the latest git versions of fog and fog-core:
group :development do
   gem 'fog-core', git: "https://github.com/fog/fog-core"
   gem 'fog', git: "https://github.com/fog/fog"
   gem 'fog-google', path: "."
   gem 'fog-json'
end
  3. With that set up, we just need to add four lines to any example:
require 'bundler'
Bundler.require(:default, :development)
# Comment this if you don't want to make real requests to GCE
WebMock.disable!

And voila! It becomes an actual working script:

temikus λ cat image_all.rb
require 'bundler'
Bundler.require(:default, :development)
# Comment this if you don't want to make real requests to GCE
WebMock.disable!

def test
  connection = Fog::Compute.new({ :provider => "Google" })

  # If this doesn't raise an exception, everything is good.
  connection.images.all
end

images = test
pp test

temikus λ ruby image_all.rb >> log
warning: parser/current is loading parser/ruby21, which recognizes
warning: 2.1.6-compliant syntax, but you are running 2.1.5.
[  <Fog::Compute::Google::Image
    name="centos-6-v20131120",
    id="11748647391859510935",
    kind="compute#image",
    archive_size_bytes="269993565",
    creation_timestamp="2013-11-25T15:13:50.611-08:00",
    deprecated={"state"=>"DEPRECATED", "replacement"=>"https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-v20150226"},
    description="SCSI-enabled CentOS 6 built on 2013-11-20",
    disk_size_gb="10",
    self_link="https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-v20131120",
    source_type="RAW",
    status="READY",
    project="centos-cloud",
    raw_disk={"source"=>"", "containerType"=>"TAR"}
...

This is just a suggestion, but IMO this will save a bunch of time for anyone who is just starting to work on this library. If you don't want any changes in Gemfile/examples, then maybe just a change in the README/CONTRIBUTING?

Let me know what you think.

Incorrect arguments for some operations.get calls causes exceptions

The recent update to google/models/compute/target_pool.save and google/models/compute/forwarding_rule.save passes operations.get incorrect arguments, which in turn throws an error when wait_for is called on a nil object. My colleague or I will be submitting a pull request soon to fix this.

Obsolete development dependencies

Currently, fog-google.gemspec says that fog-google relies on pry, vcr, webmock, coveralls, and rubocop as development dependencies. Should some of these be removed?

Cut new Gem to adjust for Google DNS API v.1 deprecation

v1beta1 will be deprecated very soon (docs state cutoff on Sept 25th)

We already had the support proposed in: #64
This needs to be acceptance tested, but I don't quite know what the current setup is.

Is master stable enough to start cutting out a new gem?
If not - what is needed to push this through?

/CC @ihmccreery who has access to Jenkins.

Accessing url is slow

So I'm having a problem where, if I try to access say image.url, it takes a little too long. For example, calling url on 100 images takes around 28 seconds when using Google Storage, but the same call against S3 takes around half a second.

This might be related to this CarrierWave issue. However, the problem they were having was for both Google Storage and S3. @geemus mentioned in that issue to use public_url instead but this is still slow compared to using fog + S3.

This might not be a fog issue but a Google Storage issue.

As a workaround I'm currently just hardcoding the url to my Google Storage API and appending image.path to that - much faster.
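For reference, the described workaround amounts to something like this (the bucket name is a placeholder, and the objects need to be publicly readable for a bare URL to work):

base_url = "https://storage.googleapis.com/my-google-bucket"
url      = "#{base_url}/#{image.path}"   # build the URL by hand instead of calling image.url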

CI Linter introduction

Are you planning on introducing a linter into the project (e.g. RuboCop)?
It could save you some cycles on PR reviews if it's tied to Travis, and also fix some style guide violations that were overlooked/inherited.

Allow Range header to work with Google Cloud Storage

The gem seems to support the HTTP Range: header in requests; however, it doesn't accept the 206 (Partial Content) response from Google, which is required.

The following PR adds basic support for accepting 206 Partial Content, allowing the Range header to be used to get objects from GCS:

#106

Support listing >1000 file directories via Fog::Storage::GoogleJSON

I am connecting to Google Cloud Storage where I have 20,000+ records, but when I connect to the bucket with Fog and list all files I only get 1000. Is there a way to increase the number of files returned? I was hoping to programmatically go through all of these and make modifications, but now I'm stuck. Anyone? Thanks and great work on this Gem. 😄
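If the collection implements the marker-following #each that fog's other storage providers use, iterating (rather than calling .all, which returns a single page) would cover every object; if Fog::Storage::GoogleJSON lacks that override, it is exactly what this issue is asking for. A sketch:

directory = connection.directories.get("my-bucket")

# .all returns only the first page of results;
# a paginating #each would walk every page transparently
directory.files.each do |file|
  puts file.key
end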

Release 0.1.1

I'm gonna bundle up 0.1.1 to go into Fog's 2.0 release. This drops support for all versions of Ruby < 2.0.

Bootstrap method should look for gcloud ssh keys

Currently, the live bootstrap test assumes the user has ~/.ssh/id_rsa[.pub] for ssh keys. Google users will typically have ~/.ssh/google_compute_engine[.pub]. In that case, ssh will "Just Work"(tm), so the suggestion is for the live bootstrap test to first try the Google key and then fall back to id_rsa.
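A minimal sketch of that fallback (the helper name is made up for illustration):

# prefer the gcloud-generated key pair, fall back to the conventional id_rsa pair
def default_ssh_key_paths
  %w[google_compute_engine id_rsa].each do |name|
    private_key = File.expand_path("~/.ssh/#{name}")
    return [private_key, "#{private_key}.pub"] if File.exist?(private_key) && File.exist?("#{private_key}.pub")
  end
  nil
end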

wdyt @ihmccreery?

Struggling to authenticate with service account (email & key)

I keep getting Missing required arguments: google_storage_access_key_id, google_storage_secret_access_key. I understand that I am supposed to put my credentials in a "/.fog" file, but I don't quite understand how that's supposed to work in the context of a Rails app. Can someone elaborate on how to configure this? I have tried passing the settings in an initializer (as suggested here), but they don't seem to get recognized in the validate_options method.

config/initializers/fog.rb

GoogleStorage = Fog::Storage.new(
  provider: 'Google',
  google_project: 'xxxxxxxxxxxxx',
  google_client_email: '[email protected]',
  google_key_location: 'private/google-cloud-service-key.p12'
)

Error in Ruby's rescue clause

The code at the following commit:

1976f5c#commitcomment-10822546

causes (at least) the following error:

/.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/models/compute/images.rb:67:in `rescue in get': class or module required for rescue clause (TypeError)
    from /.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/models/compute/images.rb:47:in `get'
    from /.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/requests/compute/insert_disk.rb:77:in `insert_disk'
    from /.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/models/compute/disk.rb:40:in `save'
    from /.../ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in `create'
    from ...

Reconcile fog-google and fog/lib/fog/google

Right now, fog-google and fog/lib/fog/google codebases don't reference each other. fog-google is (kind of) under development, but fog doesn't pull those changes in.

We need to freeze one of these codebases to prevent having changes to both fog-google and fog/lib/fog/google that are difficult to merge. I propose we either:

  1. freeze fog-google, develop in fog/lib/fog/google to the point where we're confident that the code is not broken and properly tested, then pull that codebase into fog-google; or
  2. freeze fog/lib/fog/google, pull it over to fog-google, (pretty much already done,) and deprecate fog/lib/fog/google so that all development is happening in fog-google.

I think the first option will be the easier one, and will have the highest probability of not exploding in our faces. Transferring the whole codebase, where a lot of it is of unknown correctness, could be dangerous. However, the first option goes back on the current trajectory, and also might mean that pulling that codebase into fog-google later will be more painful.

Models and requests should be unit tested in minitest

Right now, the shindo tests are backed by mocks in the codebase, which don't provide any assurance that the code actually works, only that it is internally consistent. This has led to drift away from the API as code was written and tested against the mocks, but not continually tested against the live API. In turn, the mocks are hard to keep up-to-date.

For some resources, resource#get(nil) returns a resource

I would expect that, for example, Fog::Compute[:google].servers.get(nil) should either return nil or raise an error. Instead, if at least one server exists in my project,

> Fog::Compute[:google].servers.get(nil)
=> <Fog::Compute::Google::Server
      name="server-name",
      ...
    >

This is because of the way we find servers, disks, and other resources that are zoned/regioned. For example:

servers = service.list_aggregated_servers(:filter => "name eq .*#{identity}").body['items']

Every server matches that filter if identity is nil.
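A minimal guard on top of the existing lookup would restore the expected behaviour (shown inline for illustration):

def get(identity)
  return nil if identity.nil?   # a nil identity would otherwise match every resource

  servers = service.list_aggregated_servers(:filter => "name eq .*#{identity}").body['items']
  # ... rest of the existing lookup
end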

Image create example is broken

I'm using https://github.com/fog/fog/blob/master/lib/fog/google/examples/image_create.rb as a template. There are at least two different issues:

First, connection.image.create should be connection.images.create -- trivial fix

Second, connection.images.create fails with:

/home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/compute.rb:179:in `build_excon_response': Invalid value for field 'image.hasRawDisk': 'false'.  (Fog::Errors::Error)
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/compute.rb:959:in `build_response'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/requests/compute/get_global_operation.rb:21:in `get_global_operation'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/models/compute/operations.rb:27:in `get'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/models/compute/image.rb:79:in `save'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-core-1.23.0/lib/fog/core/collection.rb:51:in `create'

servers::bootstrap should use disk/autoDelete

Right now, bootstrapped servers automatically create a disk to use with the server, but do not automatically destroy that disk when the server is destroyed. I propose we use the disk/autoDelete option when creating bootstrapped instances, to avoid orphaned disks, (more info here).
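For reference, autoDelete is a boolean on each disk entry in the instance-creation request body (field name per the Compute Engine API; the zone and disk name are placeholders):

disk = {
  "boot"       => true,
  "autoDelete" => true,   # delete this disk automatically when its instance is deleted
  "source"     => "zones/us-central1-b/disks/fog-smoke-test-disk"
}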

@icco You know this codebase better than I do; objections? Thanks.

Exception raised in #get_target_pool_health when instance is terminated

Google instances can be terminated yet still be in a Target Pool. A Fog::Errors::Error exception is raised as 'resource is not ready', which prevents you from getting health for all the other instances in that Target Pool.

Here's how I'm calling Target Pool #get_health

if (t = load_balancers.target_pools.get(n)) && t.get_health.any?
  Hash[*t.get_health.map { |i, h|
    i = i.split_link if split
    [i, { :state => h.first['healthState'], :ip_address => h.first['ipAddress'] }]
  }.flatten]
end

And the backtrace

#<Fog::Errors::Error: The resource 'projects/<PROJECT>/zones/us-central1-b/instances/<INSTANCE>' is not ready>
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/compute.rb:179:in `build_excon_response'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/compute.rb:959:in `build_response'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/requests/compute/get_target_pool_health.rb:21:in `block in get_target_pool_health'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/requests/compute/get_target_pool_health.rb:19:in `map'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/requests/compute/get_target_pool_health.rb:19:in `get_target_pool_health'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/models/compute/target_pool.rb:79:in `get_health'

Fog::DNS::Google::Records example is broken

The get method in Fog::DNS::Google::Records supports only positional arguments.

connection.records.get is structured like so:

def get(name, type)
  requires :zone

  records = service.list_resource_record_sets(zone.identity, { :name => name, :type => type }).body['rrsets'] || []
  records.any? ? new(records.first) : nil
rescue Fog::Errors::NotFound
  nil
end

, however, our example lists:

record = connection.records(zone: zone).get(name: 'tessts.example.org.',type: 'A')

, which leads to argument errors:

> record = connection.records(zone: zone).get(name: 'tessts.example.org.',type: 'A')
ArgumentError: wrong number of arguments (1 for 2)

Should I:

  • Fix the example (a one-line fix, shown below)?
    or
  • Fix the method to accept an options hash? (Ruby 2.0 doesn't support required named arguments 😞)
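For reference, fixing the example is a one-line change that matches the positional signature above:

record = connection.records(zone: zone).get('tessts.example.org.', 'A')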

/CC @plribeiro3000 @icco

ArgumentError ( is not a recognized provider): on Heroku, using paperclip and fog


This works locally on my dev machine, but doesn't work on Heroku. Does anyone have any thoughts on this?

User Model

has_attached_file :avatar,
                  styles: { big: "200x200>", thumb: "50x50>" },
                  storage: :fog,
                  fog_credentials: "#{Rails.root}/config/gce.yml",
                  fog_directory: "google-bucket-name"

validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\Z/

Gemfile

gem "paperclip", git: "git://github.com/thoughtbot/paperclip.git"
gem 'fog'

Improve authentication mechanisms

google/storage uses a legacy Amazon-compatible authentication system that still works, but it has some limitations and requires some hackery to get working in a non-trivial case. It looks for the parameters :google_storage_access_key_id and :google_storage_secret_access_key.

google/compute embraces the newer service account model and accepts :google_project, :google_client_email, :google_key_location, :google_key_string and :google_client.

Instances provisioned on Google Compute Engine can be authorized at launch time with service_account_scopes, which preauthorize the instance for various Google OAuth scopes, e.g. https://www.googleapis.com/auth/devstorage.full_control. Once this is done, a GET request to the Google metadata server from that instance returns a valid token for that service, scoped to the instance's own project, with no other service accounts required.

I would propose:

  1. expanding google/storage's vocabulary to accept the same service account parameters as google/compute

  2. expanding the vocabulary of google/compute to allow service_account_scopes to be set at instance launch time

  3. adding a parameter to both google/compute and google/storage to attempt using an OAuth token from the metadata service if fog is running on a preauthorized instance

This would allow a fog user to provision a Compute Engine node using fog and a provisioning service account, preauthorize that node to connect to Cloud Storage (and/or other Google OAuth scopes), and then have that node be able to run and interact with Cloud Storage, Datastore, etc. without needing to be issued its own unique service account.
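For point 3, fetching the token from inside a preauthorized instance is a single HTTP call to the metadata server (a minimal sketch using Net::HTTP; error handling omitted, and it only works on a GCE instance launched with the relevant scopes):

require 'net/http'
require 'json'

uri = URI("http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token")
request = Net::HTTP::Get.new(uri)
request["Metadata-Flavor"] = "Google"   # required header; requests without it are rejected

response     = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
access_token = JSON.parse(response.body)["access_token"]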

I can work on this and it doesn't look too terribly difficult, but I haven't contributed to fog before and this is really my first time looking at its internals. Before I waste too much effort, does this all sound worthwhile, and is there anyone actively maintaining the google stuff that I can coordinate with?

attribute names should be standardized

For example, Server has the following format for attributes, (this seems to be the preferred format):

attribute :can_ip_forward, :aliases => 'canIpForward'
attribute :creation_timestamp, :aliases => 'creationTimestamp'

whereas UrlMap has the following format:

attribute :fingerprint, :aliases => 'fingerprint'
attribute :hostRules, :aliases => 'host_rules'

It's worth noting that the format that UrlMap provides does not allow :underscored_symbol notation as it stands now, (symbol keys are not automatically converted to or checked against strings when passing parameters around).
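Rewritten in the preferred (Server-style) format, those UrlMap attributes would read:

attribute :fingerprint
attribute :host_rules, :aliases => 'hostRules'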

HTTPS link

config.asset_host     = 'https://assets.example.com' 

doesn't work in CarrierWave.configure

How can we make CarrierWave use the SSL version of the URL in the url method?

Add support for custom VMs

Custom VMs were released recently; it would be a good thing to support them.

This shouldn't be extremely hard, since the API is very simple; one just needs to specify a custom machineType value:

zones/ZONE/machineTypes/custom-NUMBER_OF_CPUS-AMOUNT_OF_MEMORY

Since machineType is just pasted in as a string in our case, we may just need to verify that it works.
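For example, assuming the value really is passed through verbatim, a custom machine type could be requested like this (the CPU count, memory, and zone are illustrative; memory is specified in MB):

server = connection.servers.create(
  :name         => "fog-custom-vm-test",
  :machine_type => "zones/europe-west1-b/machineTypes/custom-2-5120",   # 2 vCPUs, 5 GB RAM
  :disks        => [disk],
  :zone_name    => "europe-west1-b"
)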

More info here:
http://googlecloudplatform.blogspot.sg/2015/11/introducing-Custom-Machine-Types-the-freedom-to-configure-the-best-VM-shape-for-your-workload.html#gpluscomments
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type

Better errors for incorrect service_accounts

If one specifies an incorrect service_accounts field in the server parameters, for example:

  server = connection.servers.create(defaults = {
    :name => "fog-smoke-test-#{Time.now.to_i}",
    :disks => [disk],
    :machine_type => "n1-standard-1",
    :private_key_path => File.expand_path("~/.ssh/id_rsa"),
    :public_key_path => File.expand_path("~/.ssh/id_rsa.pub"),
    :zone_name => "europe-west1-b",
    :user => ENV['USER'],
    :tags => ["fog"],
    :service_accounts => [ 'foo', 'bar', 'baz' ],
  })

, we get a very ambiguous error back:

/Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google.rb:222:in `build_excon_response': Code: 'CghJTlNUQU5DRRImNzY4MDk5NTM1NzQxLmZvZy1zbW9rZS10ZXN0LTE0MzMxNDcxMzM=' (Fog::Errors::Error)
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google.rb:193:in `request'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/requests/compute/get_zone_operation.rb:50:in `get_zone_operation'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/models/compute/operations.rb:23:in `get'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/models/compute/operation.rb:63:in `reload'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/model.rb:70:in `block in wait_for'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/wait_for.rb:7:in `block in wait_for'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/wait_for.rb:6:in `loop'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/wait_for.rb:6:in `wait_for'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/model.rb:69:in `wait_for'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/models/compute/server.rb:280:in `save'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/collection.rb:51:in `create'
    from example_create.rb:22:in `test'
    from example_create.rb:58:in `<main>'

, where encoded string CghJTlNUQU5DRRImNzY4MDk5NTM1NzQxLmZvZy1zbW9rZS10ZXN0LTE0MzMxNDcxMzM= is just the instance name: INSTANCE&768099535741.fog-smoke-test-1433147133

Maybe we should add a bit of verbosity to it? At least "instance config rejected" or something?

What do you think?

get and other business logic should be DRY-ed up in the models

This issue is coming from the pain point that different resources behave differently when #get('nonexistent-identity') is called:

  • for Addresses#get, Servers#get, and others, if the resource isn't found, it returns nil, (this seems to be the preferred behavior,) whereas
  • for UrlMaps#get, TargetHttpProxies#get, and others, if the resource isn't found, it throws a Fog::Errors::NotFound.

This particular issue has been patched up in ikehz/fog-google@4e1d5dd and others, but it should be solved more permanently by DRY-ing up the duplicated business logic, (as well as implementing more consistent tests).

It's worth noting that these are breaking changes, but they are minor enough that I'm willing to put them in v0.1, though I'm happy to hear dissent. It will require some serious workarounds to properly test, (per the work I've been doing moving to Minitest,) if we decide not to change the behavior until v1.
