
Azurite V3


Note: The latest Azurite V3 code, which supports Blob, Queue, and Table (preview), is in the main branch. The legacy Azurite V2 code is in the legacy-master branch.

Version | Azure Storage API Version | Service Support | Description | Reference Links
--- | --- | --- | --- | ---
3.31.0 | 2024-08-04 | Blob, Queue and Table (preview) | Azurite V3 based on TypeScript & New Architecture | NPM - Docker - Visual Studio Code Extension
Legacy (v2) | 2016-05-31 | Blob, Queue and Table | Legacy Azurite V2 | NPM

Introduction

Azurite is an open source Azure Storage API compatible server (emulator). Based on Node.js, Azurite provides a cross-platform experience for customers who want to try Azure Storage easily in a local environment. Azurite simulates most of the commands supported by Azure Storage with minimal dependencies.

Azurite V2 was written by hand in pure JavaScript and remains a popular, active open source project. However, the Azure Storage APIs keep growing and changing, so manually keeping Azurite up to date is inefficient and error-prone. JavaScript also lacks strong type validation, which makes collaboration harder.

Compared to V2, Azurite V3 implements a new architecture leveraging code generated by a TypeScript Server Code Generator we created. The generator uses the same (modified) swagger specification used by the new Azure Storage SDKs. This reduces manual effort and keeps the code better aligned with the storage APIs.

3.0.0-preview is the first release version using Azurite's new architecture.

Features & Key Changes in Azurite V3

  • Blob storage features align with Azure Storage API version 2024-08-04 (Refer to support matrix section below)
    • SharedKey/Account SAS/Service SAS/Public Access Authentications/OAuth
    • Get/Set Blob Service Properties
    • Create/List/Delete Containers
    • Create/Read/List/Update/Delete Block Blobs
    • Create/Read/List/Update/Delete Page Blobs
  • Queue storage features align with Azure Storage API version 2024-08-04 (Refer to support matrix section below)
    • SharedKey/Account SAS/Service SAS/OAuth
    • Get/Set Queue Service Properties
    • Preflight Request
    • Create/List/Delete Queues
    • Put/Get/Peek/Update/Delete/Clear Messages
  • Table storage features align with Azure Storage API version 2024-08-04 (Refer to support matrix section below)
    • SharedKey/Account SAS/Service SAS/OAuth
    • Create/List/Delete Tables
    • Insert/Update/Query/Delete Table Entities
  • Features NEW on V3
    • Built with TypeScript and ECMAScript native promise and async features
    • New architecture based on a TypeScript server generator, leveraging auto-generated protocol layers, models, serializers, deserializers, and handler interfaces from the REST API swagger
    • Flexible structure and architecture, supporting customized handler layer implementations, persistency layer implementations, and HTTP pipeline middleware injection
    • Detailed debug log support for easy bug locating and reporting
    • Works with the Azure Storage .NET SDK basic and advanced samples
    • SharedKey, Account SAS, Service SAS, OAuth, and Public Access authentication support
    • Keeps up to date with the latest Azure Storage API version features (refer to the support matrix)

Getting Started

Use any of the following ways to start an Azurite V3 instance.

GitHub

After cloning the source code, execute the following commands to install and start Azurite V3.

npm ci
npm run build
npm install -g
azurite

NPM

In order to run Azurite V3 you need Node.js installed on your system. Azurite works cross-platform on Windows, Linux, and macOS, and is compatible with the currently supported Node.js LTS versions.

After installing Node.js, you can install Azurite with npm, the package manager included with every Node.js installation.

npm install -g azurite

Simply start it with the following command:

azurite -s -l c:\azurite -d c:\azurite\debug.log

or,

azurite --silent --location c:\azurite --debug c:\azurite\debug.log

This tells Azurite to store all data in the directory c:\azurite. If the -l option is omitted, the current working directory is used. You can also selectively start individual storage services.

For example, to start blob service only:

azurite-blob -l path/to/azurite/workspace

Start queue service only:

azurite-queue -l path/to/azurite/workspace

Start table service only:

azurite-table -l path/to/azurite/workspace

Visual Studio Code Extension

Azurite V3 can be installed from the Visual Studio Code extension marketplace.

You can quickly start or close Azurite by clicking the Azurite status bar item or by using the following commands.

The extension supports the following Visual Studio Code commands:

  • Azurite: Start Start all Azurite services
  • Azurite: Close Close all Azurite services
  • Azurite: Clean Reset all Azurite services persistency data
  • Azurite: Start Blob Service Start blob service
  • Azurite: Close Blob Service Close blob service
  • Azurite: Clean Blob Service Clean blob service
  • Azurite: Start Queue Service Start queue service
  • Azurite: Close Queue Service Close queue service
  • Azurite: Clean Queue Service Clean queue service
  • Azurite: Start Table Service Start table service
  • Azurite: Close Table Service Close table service
  • Azurite: Clean Table Service Clean table service

The following extension configurations are supported (a sample settings.json sketch follows the list):

  • azurite.blobHost Blob service listening endpoint, by default 127.0.0.1
  • azurite.blobPort Blob service listening port, by default 10000
  • azurite.queueHost Queue service listening endpoint, by default 127.0.0.1
  • azurite.queuePort Queue service listening port, by default 10001
  • azurite.tableHost Table service listening endpoint, by default 127.0.0.1
  • azurite.tablePort Table service listening port, by default 10002
  • azurite.location Workspace location folder path (can be relative or absolute). By default, in the VS Code extension, the currently opened folder is used. If launched from the command line, the current process working directory is the default. Relative paths are resolved relative to the default folder.
  • azurite.silent Silent mode to disable access log in Visual Studio channel, by default false
  • azurite.debug Output debug log into Azurite channel, by default false
  • azurite.loose Enable loose mode which ignores unsupported headers and parameters, by default false
  • azurite.cert Path to a PEM or PFX cert file. Required by HTTPS mode.
  • azurite.key Path to a PEM key file. Required when azurite.cert points to a PEM file.
  • azurite.pwd PFX cert password. Required when azurite.cert points to a PFX file.
  • azurite.oauth OAuth authentication level. Candidate level values: basic.
  • azurite.skipApiVersionCheck Skip the request API version check, by default false.
  • azurite.disableProductStyleUrl Force parsing storage account name from request Uri path, instead of from request Uri host.
  • azurite.inMemoryPersistence Disable persisting any data to disk. If the Azurite process is terminated, all data is lost.
  • azurite.extentMemoryLimit When using in-memory persistence, limit the total size of extents (blob and queue content) to a specific number of megabytes. This does not limit blob, queue, or table metadata. Defaults to 50% of total memory.
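
As an illustration, these options map to entries in the extension's settings.json; the values below are only a sketch and should be adjusted to your environment:

{
  "azurite.blobPort": 10100,
  "azurite.location": "c:\\azurite",
  "azurite.debug": true
}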

Run Azurite V3 docker image

Note. Find more Docker image tags at https://mcr.microsoft.com/v2/azure-storage/azurite/tags/list

docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite

-p 10000:10000 will expose blob service's default listening port. -p 10001:10001 will expose queue service's default listening port. -p 10002:10002 will expose table service's default listening port.

Or just run blob service:

docker run -p 10000:10000 mcr.microsoft.com/azure-storage/azurite azurite-blob --blobHost 0.0.0.0

Run Azurite V3 docker image with customized persisted data location

docker run -p 10000:10000 -p 10001:10001 -v c:/azurite:/data mcr.microsoft.com/azure-storage/azurite

-v c:/azurite:/data will use and map host path c:/azurite as Azurite's workspace location.

Customize all Azurite V3 supported parameters for docker image

docker run -p 7777:7777 -p 8888:8888 -p 9999:9999 -v c:/azurite:/workspace mcr.microsoft.com/azure-storage/azurite azurite -l /workspace -d /workspace/debug.log --blobPort 7777 --blobHost 0.0.0.0 --queuePort 8888 --queueHost 0.0.0.0 --tablePort 9999 --tableHost 0.0.0.0 --loose --skipApiVersionCheck --disableProductStyleUrl

The above command starts the Azurite image with the following configuration:

-l /workspace defines the folder /workspace as Azurite's location path inside the Docker instance, while /workspace is mapped to c:/azurite in the host environment by -v c:/azurite:/workspace

-d /workspace/debug.log enables the debug log at /workspace/debug.log inside the Docker instance. debug.log is also mapped to c:/azurite/debug.log on the host machine because of the Docker volume mapping.

--blobPort 7777 makes Azurite blob service listen to port 7777, while -p 7777:7777 redirects requests from host machine's port 7777 to docker instance.

--blobHost 0.0.0.0 defines blob service listening endpoint to accept requests from host machine.

--queuePort 8888 makes Azurite queue service listen to port 8888, while -p 8888:8888 redirects requests from host machine's port 8888 to docker instance.

--queueHost 0.0.0.0 defines queue service listening endpoint to accept requests from host machine.

--tablePort 9999 makes Azurite table service listen to port 9999, while -p 9999:9999 redirects requests from host machine's port 9999 to docker instance.

--tableHost 0.0.0.0 defines table service listening endpoint to accept requests from host machine.

--loose enables loose mode, which ignores unsupported headers and parameters.

--skipApiVersionCheck skips the request API version check.

--disableProductStyleUrl forces parsing the storage account name from the request URI path instead of the request URI host.

If you customize the Azurite parameters for the Docker image, --blobHost 0.0.0.0 and --queueHost 0.0.0.0 are required parameters.

If you run the above command from Git Bash on Windows, use a double leading slash (for example //workspace) for the location and debug path parameters to avoid a known path-conversion issue for Git on Windows.

More release channels for Azurite V3 will be supported in the future.

Docker Compose

To run Azurite in Docker Compose, you can start with the following configuration:

---
version: "3.9"
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: "azurite"
    hostname: azurite
    restart: always
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"

NuGet

Releasing Azurite V3 to NuGet is under investigation.

Visual Studio

Integrating Azurite with Visual Studio is under investigation.

Supported Command Line Options

Listening Host Configuration

Optional. By default, Azurite V3 listens on 127.0.0.1 as a local server. You can customize the listening address as needed.

Only Accept Requests in Local Machine

--blobHost 127.0.0.1
--queueHost 127.0.0.1
--tableHost 127.0.0.1

Allow Accepting Requests from Remote (potentially unsafe)

--blobHost 0.0.0.0
--queueHost 0.0.0.0
--tableHost 0.0.0.0

Listening Port Configuration

Optional. By default, Azurite V3 listens on port 10000 for the blob service, 10001 for the queue service, and 10002 for the table service. You can customize the listening ports as needed.

Warning: after using a customized port, you need to update the connection string or configuration correspondingly in your storage tools or SDKs. If you see the error Error: listen EACCES 0.0.0.0:10000 when starting Azurite, the TCP port is most likely already occupied by another process.

Customize Blob/Queue/Table Service Listening Ports

--blobPort 8888
--queuePort 9999
--tablePort 11111

Let System Auto Select an Available Port

--blobPort 0
--queuePort 0
--tablePort 0

Note: The port in use is displayed on Azurite startup.

Workspace Path Configuration

Optional. Azurite V3 needs to persist metadata and binary data to local disk during execution.

You can provide a customized path as the workspace location; by default, the current process working directory is used.

-l c:\azurite
--location c:\azurite

Access Log Configuration

Optional. By default Azurite displays an access log in the console. Disable it with:

-s
--silent

Debug Log Configuration

Optional. Debug log includes detailed information on every request and exception stack traces. Enable it by providing a valid local file path for the debug log destination.

-d path/debug.log
--debug path/debug.log

Loose Mode Configuration

Optional. By default Azurite applies strict mode, which blocks unsupported request headers and parameters. Disable it by enabling loose mode:

-L
--loose

Certificate Configuration (HTTPS)

Optional. By default Azurite listens on HTTP. Provide a PEM or PFX certificate file path to enable HTTPS mode:

--cert path/server.pem

When --cert is provided for a PEM file, you must also provide the corresponding --key.

--key path/key.pem

When --cert is provided for a PFX file, you must also provide the corresponding --pwd.

--pwd pfxpassword

OAuth Configuration

Optional. By default, Azurite doesn't support OAuth or bearer tokens. Enable OAuth authentication for Azurite with:

--oauth basic

Note. OAuth requires an HTTPS endpoint. Make sure HTTPS is enabled by providing the --cert parameter along with the --oauth parameter.

Currently, Azurite supports the following OAuth authentication levels:

Basic

At the basic level (--oauth basic), Azurite performs basic authentication: it validates the incoming bearer token and checks the issuer, audience, and expiry. It does NOT check the token signature or permissions.

Skip API Version Check

Optional. By default Azurite checks that the request API version is valid. Skip the API version check with:

--skipApiVersionCheck

Disable Product Style Url

Optional. When an FQDN (rather than an IP address) is used as the request URI host, Azurite by default parses the storage account name from the request URI host. Force parsing the storage account name from the request URI path with:

--disableProductStyleUrl

Use in-memory storage

Optional. Disables persisting any data to disk and stores data only in memory. If the Azurite process is terminated, all data is lost. By default, LokiJS persists blob and queue metadata to disk and content to extent files, and table storage persists all data to disk. This behavior can be disabled with this option. This setting is rejected when the SQL-based metadata implementation is enabled (via AZURITE_DB), and it is rejected when the --location option is specified.

--inMemoryPersistence

By default, the in-memory extent store (for blob and queue content) is limited to 50% of the total memory on the host machine, as evaluated using os.totalmem(). This limit can be overridden with the --extentMemoryLimit <megabytes> option. There is no restriction on the value specified for this option, but virtual memory may be used if the limit exceeds the amount of available physical memory provided by the operating system. A high limit may eventually lead to out-of-memory errors or reduced performance.

As blob or queue content (i.e. bytes in the in-memory extent store) is deleted, the memory is not freed immediately. Similar to the default file-system based extent store, both the blob and queue service have an extent garbage collection (GC) process. This process is in addition to the standard Node.js runtime GC. The extent GC periodically detects unused extents and deletes them from the extent store. This happens on a regular time period rather than immediately after the blob or queue REST API operation that caused some content to be deleted. This means that process memory consumed by the deleted blob or queue content will only be released after both the extent GC and the runtime GC have run. The extent GC will remove the reference to the in-memory byte storage and the runtime GC will free the unreferenced memory some time after that. The blob extent GC runs every 10 minutes and the queue extent GC runs every 1 minute.

The queue and blob extent storage count towards the same limit. The --extentMemoryLimit setting is rejected when --inMemoryPersistence is not specified. LokiJS storage (blob and queue metadata and table data) does not contribute to this limit and is unbounded which is the same as without the --inMemoryPersistence option.

--extentMemoryLimit <megabytes>

This option is rejected when --inMemoryPersistence is not specified.
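
For example, to run Azurite entirely in memory with a 1 GB extent limit (the values here are only illustrative):

azurite --inMemoryPersistence --extentMemoryLimit 1024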

When the limit is reached, write operations to the blob or queue endpoints which carry content will fail with an HTTP 409 status code, a custom storage error code of MemoryExtentStoreAtSizeLimit, and a helpful error message. Well-behaved storage SDKs and tools will not retry on this failure and will surface the related error message. If this error is hit, consider deleting some in-memory content (blobs or queues), raising the limit, or restarting the Azurite server, which resets the storage completely.

Note that if many hundreds of megabytes of content (queue message or blob content) are stored in-memory, it can take noticeably longer than usual for the process to terminate since all the consumed memory needs to be released.

Command Line Options Differences between Azurite V2

Azurite V3 supports SharedKey, Account Shared Access Signature (SAS), Service SAS, OAuth, and Public Container Access authentication. You can use Azure Storage SDKs or tools like Storage Explorer to connect to Azurite V3 with any of these authentication strategies.

An option to bypass authentication is NOT provided in Azurite V3.

Supported Environment Variable Options

When starting Azurite from the npm command line azurite or from the Docker image, the following environment variables are supported for advanced customization.

Customized Storage Accounts & Keys

Azurite V3 allows customizing storage account names and keys by providing environment variable AZURITE_ACCOUNTS with format account1:key1[:key2];account2:key1[:key2];....

For example, customize one storage account which has only one key:

set AZURITE_ACCOUNTS=account1:key1

Or customize multiple storage accounts, each with two keys:

set AZURITE_ACCOUNTS=account1:key1:key2;account2:key1:key2

Azurite refreshes the customized account names and keys from the environment variable every minute by default. With this feature, you can dynamically rotate account keys or add new storage accounts on the fly without restarting the Azurite instance.

Note. The default storage account devstoreaccount1 is disabled when customized storage accounts are provided.

Note. Account keys must be base64 encoded strings.
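
For example, one illustrative way to produce such a key is with OpenSSL:

openssl rand -base64 64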

Note. Update your connection strings accordingly when using customized account names and keys.

Note. Use the export keyword to set environment variables in Linux-like environments, and set on Windows.

Note. When choosing storage account names, keep the same rules in mind as for Azure Storage accounts:

  • Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.

Customized Metadata Storage by External Database (Preview)

By default, Azurite leverages LokiJS as its metadata database. However, as an in-memory database, LokiJS limits Azurite's scalability and data persistency. Set the environment variable AZURITE_DB=dialect://[username][:password][@]host:port/database to make the Azurite blob service switch to SQL-database-based metadata storage, such as MySQL or SQL Server.

For example, connect to MySQL or SQL Server by setting the environment variable:

set AZURITE_DB=mysql://username:password@localhost:3306/azurite_blob
set AZURITE_DB=mssql://username:password@localhost:1024/azurite_blob

When Azurite starts with the above environment variable, it connects to the configured database and creates tables if they don't exist. This feature is in preview; when Azurite changes its database table schema, you need to drop the existing tables and let Azurite regenerate them.

Note. The database must be created manually before starting the Azurite instance.

Note. Blob Copy & Page Blob are not supported by SQL based metadata implementation.

Tip. You can create a database instance quickly with Docker, for example docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest. Grant external access and create the database azurite_blob using docker exec mysql mysql -u root -pmy-secret-pw -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES; create database azurite_blob;". Note that the above commands are only examples; define access permissions carefully in your production environment.

HTTPS Setup

Azurite natively supports HTTPS with self-signed certificates via the --cert and --key/--pwd options. You have two certificate type options: PEM or PFX. PEM certificates are split into "cert" and "key" files. A PFX certificate is a single file that can be assigned a password.

PEM

Generate PEM Certificate and Key

You have a few options to generate PEM certificate and key files. We'll show you how to use mkcert and OpenSSL.

mkcert

mkcert is a utility that makes the entire self-signed certificate process much easier because it wraps a lot of the complex commands that you need to manually execute with other utilities.

Generate Certificate and Key with mkcert
  1. Install mkcert: https://github.com/FiloSottile/mkcert#installation. We like to use Chocolatey (choco install mkcert), but you can install it with any mechanism you'd like.
  2. Run the following commands to install the Root CA and generate a cert for Azurite.
mkcert -install
mkcert 127.0.0.1

That creates two files: a certificate file, 127.0.0.1.pem, and a key file, 127.0.0.1-key.pem.

Start Azurite with HTTPS and PEM

Then you start Azurite with that cert and key.

azurite --cert 127.0.0.1.pem --key 127.0.0.1-key.pem

If you start Azurite with Docker, you need to map the folder containing the cert and key files into the container. In the following example, the local folder c:/azurite contains the cert and key files and is mapped to /workspace in the container.

docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 -v c:/azurite:/workspace  mcr.microsoft.com/azure-storage/azurite azurite --blobHost 0.0.0.0  --queueHost 0.0.0.0 --tableHost 0.0.0.0 --cert /workspace/127.0.0.1.pem --key /workspace/127.0.0.1-key.pem

OpenSSL

OpenSSL is a TLS/SSL toolkit. You can use it to generate certificates. It is more involved than mkcert, but has more options.

Install OpenSSL on Windows
  1. Download and install the OpenSSL v1.1.1a+ EXE from http://slproweb.com/products/Win32OpenSSL.html
  2. Set the following environment variables
set OPENSSL_CONF=c:\OpenSSL-Win32\bin\openssl.cfg
set Path=%PATH%;c:\OpenSSL-Win32\bin
Generate Certificate and Key

Execute the following command to generate a cert and key with OpenSSL.

openssl req -newkey rsa:2048 -x509 -nodes -keyout key.pem -new -out cert.pem -sha256 -days 365 -addext "subjectAltName=IP:127.0.0.1" -subj "/C=CO/ST=ST/L=LO/O=OR/OU=OU/CN=CN"

The -subj values are required, but do not have to be valid. The subjectAltName must contain the Azurite IP address.

Add Certificate to Trusted Root Store

You then need to add that certificate to the Trusted Root Certification Authorities. This is required to work with Azure SDKs and Storage Explorer.

Here's how to do that on Windows:

certutil -addstore -enterprise -f "Root" cert.pem

Start Azurite with HTTPS and PEM

Then you start Azurite with that cert and key.

azurite --cert cert.pem --key key.pem

NOTE: If you are using the Azure SDKs, then you will also need to pass the --oauth basic option.

PFX

Generate PFX Certificate

You first need to generate a PFX file to use with Azurite.

You can use the following command to generate a PFX file with dotnet dev-certs, which is installed with the .NET Core SDK.

dotnet dev-certs https --trust -ep cert.pfx -p <password>

Storage Explorer does not currently work with certificates produced by dotnet dev-certs. While you can use them for Azurite and Azure SDKs, you won't be able to access the Azurite endpoints with Storage Explorer if you are using the certs created with dotnet dev-certs. We are tracking this issue on GitHub here: microsoft/AzureStorageExplorer#2859

Start Azurite with HTTPS and PFX

Then you start Azurite with that cert and password.

azurite --cert cert.pfx --pwd pfxpassword

NOTE: If you are using the Azure SDKs, then you will also need to pass the --oauth basic option.


Usage with Azure Storage SDKs or Tools

Default Storage Account

Azurite V3 supports a default storage account, equivalent to a general-purpose v2 storage account, with its associated features.

  • Account name: devstoreaccount1
  • Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Note. Besides SharedKey authentication, Azurite V3 supports account SAS, service SAS, and OAuth authentication. Anonymous access is also available when a container is set to allow public access.

Customized Storage Accounts & Keys

As mentioned in the section above, Azurite V3 allows customizing storage account names and keys by providing the environment variable AZURITE_ACCOUNTS with the format account1:key1[:key2];account2:key1[:key2];.... Account keys must be base64 encoded strings.

For example, customize one storage account which has only one key:

set AZURITE_ACCOUNTS="account1:key1"

Or customize multiple storage accounts, each with two keys:

set AZURITE_ACCOUNTS="account1:key1:key2;account2:key1:key2"

Connection Strings

HTTP Connection Strings

You can pass the following connection strings to the Azure SDKs or tools (like Azure CLI 2.0 or Storage Explorer).

The full connection string is:

DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;

For the blob service only, the full connection string is:

DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;

Or, if the SDK or tool supports it, you can use the following shorthand connection string:

UseDevelopmentStorage=true;
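
For example, with the .NET SDK the shorthand can be passed directly as the connection string (a minimal sketch; the clients below come from the Azure.Storage.Blobs and Azure.Storage.Queues packages):

// Both clients resolve UseDevelopmentStorage=true to the default emulator account and endpoints.
var blobService = new BlobServiceClient("UseDevelopmentStorage=true");
var queueService = new QueueServiceClient("UseDevelopmentStorage=true");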

HTTPS Connection Strings

The full HTTPS connection string is:

DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1

To use the Blob service only, the HTTPS connection string is:

DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;

If you used dotnet dev-certs to generate your self-signed certificate, then you need to use the following connection string, because dotnet dev-certs only generates a cert for localhost, not 127.0.0.1.

DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;

Azure SDKs

To use Azurite with the Azure SDKs, you can use OAuth with HTTPS options:

azurite --oauth basic --cert certname.pem --key certname-key.pem

Azure Blob Storage

You can then instantiate BlobContainerClient, BlobServiceClient, or BlobClient.

// With container url and DefaultAzureCredential
var client = new BlobContainerClient(new Uri("https://127.0.0.1:10000/devstoreaccount1/container-name"), new DefaultAzureCredential());

// With connection string
var client = new BlobContainerClient("DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "container-name");

// With account name and key
var client = new BlobContainerClient(new Uri("https://127.0.0.1:10000/devstoreaccount1/container-name"), new StorageSharedKeyCredential("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="));
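
Once constructed, the container client can be used as it would be against Azure Storage. A minimal usage sketch (assuming a recent Azure.Storage.Blobs package; names and content are illustrative):

// Create the container if it doesn't already exist, then upload a small text blob.
await client.CreateIfNotExistsAsync();
await client.UploadBlobAsync("hello.txt", BinaryData.FromString("Hello from Azurite!"));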

Azure Queue Storage

You can also instantiate QueueClient or QueueServiceClient.

// With queue url and DefaultAzureCredential
var client = new QueueClient(new Uri("https://127.0.0.1:10001/devstoreaccount1/queue-name"), new DefaultAzureCredential());

// With connection string
var client = new QueueClient("DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "queue-name");

// With account name and key
var client = new QueueClient(new Uri("https://127.0.0.1:10001/devstoreaccount1/queue-name"), new StorageSharedKeyCredential("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="));
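
Similarly, a minimal usage sketch for the queue client (assuming a recent Azure.Storage.Queues package; the queue name and message are illustrative):

// Create the queue if it doesn't already exist, send a message, then read it back.
await client.CreateIfNotExistsAsync();
await client.SendMessageAsync("Hello from Azurite!");
var messages = await client.ReceiveMessagesAsync(maxMessages: 1);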

Storage Explorer

Storage Explorer with Azurite HTTP

Connect to Azurite by clicking the "Add Account" icon, then select "Attach to a local emulator" and click "Connect".

Storage Explorer with Azurite HTTPS

By default Storage Explorer will not open an HTTPS endpoint that uses a self-signed certificate. If you are running Azurite with HTTPS, then you are likely using a self-signed certificate. Fortunately, Storage Explorer allows you to import SSL certificates via the Edit -> SSL Certificates -> Import Certificates dialog.

Import Certificate to Storage Explorer
  1. Find the certificate on your local machine.
    • OpenSSL: You can find the PEM file at the location you created in the HTTPS Setup section above.
    • mkcert: You need to import the RootCA.pem file, which can be found by executing mkcert -CAROOT in the terminal. Import the RootCA.pem file, not the certificate file you created.
    • dotnet dev-certs: Storage Explorer doesn't currently work with certs produced by dotnet dev-certs. We are tracking this issue on GitHub here: microsoft/AzureStorageExplorer#2859
  2. Open Storage Explorer -> Edit -> SSL Certificates -> Import Certificates and import your certificate.

If you do not import the certificate, you will get one of the following errors:

unable to verify the first certificate

or

self signed certificate in chain

Add Azurite via HTTPS Connection String

Follow these steps to add Azurite HTTPS to Storage Explorer:

  1. Right click on Local & Attached -> Storage Accounts and select "Connect to Azure Storage...".
  2. Select "Use a connection string" and click Next.
  3. Enter a name, e.g. Azurite.
  4. Enter the HTTPS connection string from the previous section of this document and click Next.

You can now explore the Azurite HTTPS endpoints with Storage Explorer.

Workspace Structure

The following files or folders may be created when initializing Azurite in the selected workspace location.

  • azurite_db_blob.json Metadata file used by the Azurite blob service. (Not created when Azurite runs against an external database)
  • azurite_db_blob_extent.json Extent metadata file used by the Azurite blob service. (Not created when Azurite runs against an external database)
  • blobstorage Persisted binary data for the Azurite blob service.
  • azurite_db_queue.json Metadata file used by the Azurite queue service. (Not created when Azurite runs against an external database)
  • azurite_db_queue_extent.json Extent metadata file used by the Azurite queue service. (Not created when Azurite runs against an external database)
  • queuestorage Persisted binary data for the Azurite queue service.
  • azurite_db_table.json Metadata file used by the Azurite table service.

Note. Delete the above files and folders and restart Azurite to clean it up. This removes all data stored in Azurite!

Differences between Azurite and Azure Storage

Because Azurite runs as a local instance for persistent data storage, there are differences in functionality between Azurite and an Azure storage account in the cloud.

Storage Accounts

You can enable multiple accounts by setting the environment variable AZURITE_ACCOUNTS. See the section above.

Optionally, you can modify your hosts file to access accounts with a production-style URL. See the section below.

Endpoint & Connection URL

The service endpoints for Azurite are different from those of an Azure storage account. The difference is because Azurite runs on the local computer, and normally no DNS entry resolves the address locally.

When you address a resource in an Azure storage account, use the following scheme. The account name is part of the URI host name, and the resource being addressed is part of the URI path:

<http|https>://<account-name>.<service-name>.core.windows.net/<resource-path>

For example, the following URI is a valid address for a blob in an Azure storage account:

https://myaccount.blob.core.windows.net/mycontainer/myblob.txt

IP-style URL

However, because Azurite runs on the local computer, it uses an IP-style URI by default, and the account name is part of the URI path instead of the host name. Use the following URI format for a resource in Azurite:

http://<local-machine-address>:<port>/<account-name>/<resource-path>

For example, the following address might be used for accessing a blob in Azurite:

http://127.0.0.1:10000/myaccount/mycontainer/myblob.txt

The service endpoint for the Azurite blob service is:

http://127.0.0.1:10000/<account-name>/<resource-path>

Production-style URL

Optionally, you can modify your hosts file to access an account with a production-style URL.

First, add line(s) to your hosts file, like:

127.0.0.1 account1.blob.localhost
127.0.0.1 account1.queue.localhost
127.0.0.1 account1.table.localhost

Second, set environment variables to enable customized storage accounts & keys:

set AZURITE_ACCOUNTS="account1:key1:key2"

You can add more accounts. See the section above.

Finally, start Azurite and use a customized connection string to access your account.

The connection string below assumes the default ports are used.

DefaultEndpointsProtocol=http;AccountName=account1;AccountKey=key1;BlobEndpoint=http://account1.blob.localhost:10000;QueueEndpoint=http://account1.queue.localhost:10001;TableEndpoint=http://account1.table.localhost:10002;
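
With the .NET SDK, the same production-style endpoint can also be used directly when constructing a client (a sketch assuming the hosts entries and AZURITE_ACCOUNTS value above; key1 must be a base64 encoded key):

// Points the client at the production-style blob endpoint for the custom account.
var client = new BlobServiceClient(
    new Uri("http://account1.blob.localhost:10000"),
    new StorageSharedKeyCredential("account1", "key1"));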

Note. Do not access the default account this way with Azure Storage Explorer. There is a bug where Storage Explorer always adds the account name to the URL path, causing failures.

Note. When using a production-style URL to access Azurite, by default the account name should be the host name in the FQDN, e.g. "http://devstoreaccount1.blob.localhost:10000/container". To use a production-style URL with the account name in the URL path, e.g. "http://foo.bar.com:10000/devstoreaccount1/container", start Azurite with --disableProductStyleUrl.

Note. If "host.docker.internal" is used as the request URI host, e.g. "http://host.docker.internal:10000/devstoreaccount1/container", Azurite always gets the account name from the request URI path, whether or not Azurite was started with --disableProductStyleUrl.

Scalability & Performance

Please reach out to us if you have requirements or suggestions for a distributed Azurite implementation or higher performance.

Azurite is not a scalable storage service and does not support many concurrent clients. There is also no performance or TPS guarantee; both depend heavily on the environment in which Azurite is deployed.

Error Handling

Please reach out to us if you have requirements or suggestions for specific error handling.

Azurite tries to align with Azure Storage error handling logic and provides best-effort alignment based on the Azure Storage online documentation. It CANNOT provide 100% alignment; for example, error messages (returned in the error response body) may differ, while error status codes will align.

API Version Compatible Strategy

Azurite V3 follows a best-effort compatibility strategy with Azure Storage API versions:

  • An Azurite V3 instance has a baseline Azure Storage API version.
    • A Swagger definition (OpenAPI doc) with the same API version will be used to generate protocol layer APIs and interfaces.
    • Azurite should implement all the possible features provided in this API service version.
  • If an incoming request has the same API version Azurite provides, Azurite should handle the request with parity to Azure Storage.
  • If an incoming request has a higher API version than Azurite, Azurite will return an InvalidHeaderValue error for x-ms-version (HTTP status code 400 - Bad Request).
  • If an incoming request has a lower API version header than Azurite, Azurite will attempt to handle the request with Azurite's baseline API version behavior instead of the behavior of the requested version.
  • Azurite returns its baseline API version in the response API version header.
  • SAS is accepted following the pattern from API version 2015-04-05.

RA-GRS

Azurite supports read-access geo-redundant replication (RA-GRS). For storage resources both in the cloud and in the local emulator, you can access the secondary location by appending -secondary to the account name. For example, the following address might be used for accessing a blob using the secondary in Azurite:

http://127.0.0.1:10000/devstoreaccount1-secondary/mycontainer/myblob.txt

Note. The secondary endpoint is not read-only in Azurite, which differs from Azure Storage.

Differences between Azurite V3 and Azurite V2

Both Azurite V3 and Azurite V2 aim to provide a convenient emulation for customers to quickly try out Azure Storage services locally. There are lots of differences between Azurite V3 and legacy Azurite V2.

Architecture

The architecture of Azurite V3 has been refactored to be more flexible and robust. It provides the flexibility to support the following scenarios in the future:

  • Use other HTTP frameworks instead of express.js
  • Customized new handler layer implementation, such as redirecting requests to Azure Storage services
  • Implement and inject a new persistency layer implementation, such as one based on a different database service
  • Provide support for multiple azure storage accounts and authentication
  • Detailed debug logging for easy issue investigation and request tracking
  • Create HTTPS server
  • ...

Server Code Generator

Azurite V3 leverages a TypeScript server code generator based on the Azure Storage REST API swagger specifications. This reduces manual effort and keeps the implementation aligned with the APIs.

TypeScript

Azurite V3 selected TypeScript as its programming language, as this facilitates broad collaboration while also ensuring quality.

Features Scope

Legacy Azurite V2 supports the Azure Storage Blob, Queue, and Table services. Azurite V3 started with the blob service only; the queue service has been supported since V3.2.0-preview, and the table service is available in preview.

Azurite V3 supports features from Azure Storage API version 2024-08-04 and will maintain parity with the latest API versions, on a more frequent update cadence than legacy Azurite V2.

TypeScript Server Code Generator

Azurite V3 leverages a TypeScript Node.js Server Code Generator to generate the majority of code from the Azure Storage REST API swagger specification. Currently, the generator project is private, under development, and only used by Azurite V3. We plan to make the TypeScript server generator public after Azurite V3 releases. All the generated code is kept in the generated folder, including the generated middleware, request, and response models.

Support Matrix

The latest release targets API version 2024-08-04 for the blob service.

Detailed support matrix:

  • Supported Vertical Features

    • CORS and Preflight
    • SharedKey Authentication
    • OAuth authentication
    • Shared Access Signature Account Level
    • Shared Access Signature Service Level (response header override in service SAS is not supported)
    • Container Public Access
  • Supported REST APIs

    • List Containers
    • Set Service Properties
    • Get Service Properties
    • Get Stats
    • Get Account Information
    • Create Container
    • Get Container Properties
    • Get Container Metadata
    • Set Container Metadata
    • Get Container ACL
    • Set Container ACL
    • Delete Container
    • Lease Container
    • List Blobs
    • Put Blob (Create append blob is not supported)
    • Get Blob
    • Get Blob Properties
    • Set Blob Properties
    • Get Blob Metadata
    • Set Blob Metadata
    • Create Append Blob, Append Block
    • Lease Blob
    • Snapshot Blob
    • Copy Blob (Only supports copy within same Azurite instance)
    • Abort Copy Blob (Only supports copy within same Azurite instance)
    • Copy Blob From URL (Only supports copy within same Azurite instance, only on Loki)
    • Access control based on conditional headers
  • The following features or REST APIs are NOT supported or have limited support in this release (more features will be supported in future releases based on customer feedback)

    • SharedKey Lite
    • Static Website
    • Soft delete & Undelete Container
    • Soft delete & Undelete Blob
    • Incremental Copy Blob
    • Blob Tags
    • Blob Query
    • Blob Versions
    • Blob Last Access Time
    • Concurrent Append
    • Blob Expiry
    • Object Replication Service
    • Put Blob From URL
    • Version Level Worm
    • Sync copy blob by access source with oauth
    • Encryption Scope
    • Get Page Ranges Continuation Token

The latest version supports API version 2024-08-04 for the queue service. Detailed support matrix:

  • Supported Vertical Features
    • SharedKey Authentication
    • Shared Access Signature Account Level
    • Shared Access Signature Service Level
    • OAuth authentication
  • Supported REST APIs
    • List Queues
    • Set Service Properties
    • Get Service Properties
    • Get Stats
    • Preflight Queue Request
    • Create Queue
    • Get Queue Metadata
    • Set Queue Metadata
    • Get Queue ACL
    • Set Queue ACL
    • Delete Queue
    • Put Message
    • Get Messages
    • Peek Messages
    • Delete Message
    • Update Message
    • Clear Message
  • The following features or REST APIs are NOT supported or have limited support in this release (more features will be supported in future releases based on customer feedback)
    • SharedKey Lite

The latest version supports API version 2024-08-04 for the table service (preview). Detailed support matrix:

  • Supported Vertical Features
    • SharedKeyLite Authentication
    • SharedKey Authentication
    • Shared Access Signature Account Level
    • Shared Access Signature Service Level
  • Supported REST APIs
    • List Tables
    • Create Table
    • Delete Table
    • Update Entity
    • Query Entities
    • Merge Entity
    • Delete Entity
    • Insert Entity
    • Batch
  • The following features or REST APIs are NOT supported or have limited support in this release (more features will be supported in future releases based on customer feedback)
    • Set Service Properties
    • Get Service Properties
    • Get Table ACL
    • Set Table ACL
    • Get Stats
    • CORS
    • Batch Transaction
    • Query with complex conditions
    • OAuth

License

This project is licensed under MIT.

We Welcome Contributions!

Go to the GitHub project page or GitHub issues for the milestone and TODO items we use to track upcoming features and bug fixes.

We are currently working on Azurite V3 to implement the remaining Azure Storage REST APIs. We have finished the basic structure and the majority of features for Blob Storage, as can be seen in the support matrix. The detailed work items are also tracked in the GitHub repository's projects and issues.

Any contributions and suggestions for Azurite V3 are welcome; please go to CONTRIBUTION.md for detailed contribution guidelines. Alternatively, you can open GitHub issues to vote for any missing features in Azurite V3.

Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

azurite's Issues

Use Typescript

Strategy:

  • Convert existing code to ES6 (require => import) - #63
  • Convert JS to TS - with minimal tsconfig and lint configuration

Each step should be merged separately, so that the git history does not look like a mess and it will be possible to track back cleanly.
This approach will also allow for "easier" PR reviews and put the codebase in a position where by we can start to nicer typescript features such as async await

Connect with Storage Explorer

I already looked into the previous issue related to storage explorer. Is it possible to connect when running on Mac OSX? I tried to connect using the connection string provided in the Readme (below) but I get an unhelpful error from explorer.

DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;

screen shot 2018-03-26 at 12 30 26 am

Azure Table Query not working

I am doing a simple search on partitionkey and rowkey -> no data is being returned. If I remove all filters from the filter I get the data back.

I have tried this using WindowsAzure.Storage nuget package lib and also the Microsoft Azure Storage Explorer - both produce the same result

**PANIC** while creating storage container policy

Hi,

I'm using azurite version: 2.6.5 in a docker container on Windows 10 and docker-ce in WSL bash.

When I try to create a storage container policy with:

export AZURE_STORAGE_CONNECTION_STRING=BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;

docker run --rm -d -p 10000:10000 arafato/azurite:2.6.5
az storage container create --name test
az storage container policy create --name read --permission rl --container-name test

I'm getting the following error:

PUT /devstoreaccount1/test?restype=container&timeout=30 201 9.546 ms - 0
GET /devstoreaccount1/test?comp=acl&restype=container 200 6.874 ms - 59
PANIC Something unexpected happened! Blob Storage Emulator may be in an inconsistent state!
TypeError: Cannot read property '0' of undefined
at parseStringAsync.then (/opt/azurite/lib/xml/Serializers.js:19:81)
at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/azurite/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/opt/azurite/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromiseCtx (/opt/azurite/node_modules/bluebird/js/release/promise.js:606:10)
at Async._drainQueue (/opt/azurite/node_modules/bluebird/js/release/async.js:138:12)
at Async._drainQueues (/opt/azurite/node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:763:18)
at tryOnImmediate (timers.js:734:5)
at processImmediate (timers.js:716:5)
PANIC Something unexpected happened! Blob Storage Emulator may be in an inconsistent state!
TypeError: Cannot read property '0' of undefined
at parseStringAsync.then (/opt/azurite/lib/xml/Serializers.js:19:81)
at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/azurite/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/opt/azurite/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromiseCtx (/opt/azurite/node_modules/bluebird/js/release/promise.js:606:10)
at Async._drainQueue (/opt/azurite/node_modules/bluebird/js/release/async.js:138:12)
at Async._drainQueues (/opt/azurite/node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:763:18)
at tryOnImmediate (timers.js:734:5)
at processImmediate (timers.js:716:5)
PANIC Something unexpected happened! Blob Storage Emulator may be in an inconsistent state!
TypeError: Cannot read property '0' of undefined
at parseStringAsync.then (/opt/azurite/lib/xml/Serializers.js:19:81)
at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/azurite/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/opt/azurite/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromiseCtx (/opt/azurite/node_modules/bluebird/js/release/promise.js:606:10)
at Async._drainQueue (/opt/azurite/node_modules/bluebird/js/release/async.js:138:12)
at Async._drainQueues (/opt/azurite/node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async.js:17:14)js:17:1
4)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async
.js:17
:14)
at runCallback (timers.js:763:18)
at tryOnImmediate (timers.js:734:5)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async
.UT /devstoreaccount1/test?comp=acl&restype=container - - ms - -
js:17:14)
at runCallback (timers.js:763:18) ) at tryOnImmediate (timers.js:734:5)
at processImmediate (timers.js:716:5)
PUT /devstoreaccount1/test?comp=acl&restype=container - - ms - - )
PANIC Something unexpected happened! Blob Storage Emulator may be in an inconsistent state! js:17:14TypeError: Cannot read property '0' of undefined .
at parseStringAsync.then (/opt/azurite/lib/xml/Serializers.js:19:81)
at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/azurite/node_modules/bluebird/js/release/promise.js:512:31
) at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async
at Promise._settlePromise (/opt/azurite/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromiseFromHandler (/opt/azurite/node_modules/bluebird/js/release/promise.js:512:31) at Promise._settlePromise (/opt/azurite/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromiseCtx (/opt/azurite/node_modules/bluebird/js/release/promise.js:606:10)
at Async._drainQueue (/opt/azurite/node_modules/bluebird/js/release/async.js:138:12) at Async._drainQueues (/opt/azurite/node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:763:18) at tryOnImmediate (timers.js:734:5)
at processImmediate (timers.js:716:5)
PANIC Something unexpected happened! Blob Storage Emulator may be in an inconsistent state!
TypeError: Cannot read property '0' of undefined
at parseStringAsync.then (/opt/azurite/lib/xml/Serializers.js:19:81)
at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/azurite/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/opt/azurite/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromiseCtx (/opt/azurite/node_modules/bluebird/js/release/promise.js:606:10) at Async._drainQueue (/opt/azurite/node_modules/bluebird/js/release/async.js:138:12)
at Async._drainQueues (/opt/azurite/node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/azurite/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:763:18)
at tryOnImmediate (timers.js:734:5)
at processImmediate (timers.js:716:5)PUT /devstoreaccount1/test?comp=acl&restype=container - - ms - -
PUT /devstoreaccount1/test?comp=acl&restype=container - - ms - -
PUT /devstoreaccount1/test?comp=acl&restype=container - - ms - -
PUT /devstoreaccount1/test?comp=acl&restype=container - - ms - -

Get Blob returns "501 Not Implemented"

Problem description

While Get Blob requests seem to work perfectly fine from the az CLI, 501 errors are systematically returned when using the Blob SDK for Go.

CLI results

$ az storage blob download \
    -f /tmp/test.zip -n test.zip --container-name tests \
    --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;'
output
Finished[#############################################################]  100.0000%
{
  "content": null,
  "deleted": false,
  "metadata": {},
  "name": "test.zip",
  "properties": {
    "appendBlobCommittedBlockCount": null,
    "blobTier": null,
    "blobTierChangeTime": null,
    "blobTierInferred": false,
    "blobType": "BlockBlob",
    "contentLength": 3108,
    "contentRange": "bytes 0-3107/3108",
    "contentSettings": {
      "cacheControl": null,
      "contentDisposition": null,
      "contentEncoding": null,
      "contentLanguage": null,
      "contentMd5": "undefined",
      "contentType": "application/zip"
    },
    "copy": {
      "completionTime": null,
      "id": null,
      "progress": null,
      "source": null,
      "status": null,
      "statusDescription": null
    },
    "deletedTime": null,
    "etag": "\"5y1Za9ku1PTKog6mbmce+DM9xLQ\"",
    "lastModified": "2018-06-22T18:26:16+00:00",
    "lease": {
      "duration": null,
      "state": null,
      "status": null
    },
    "pageBlobSequenceNumber": null,
    "remainingRetentionDays": null,
    "serverEncrypted": null
  },
  "snapshot": null
}

SDK results

package main

import (
	"context"
	"fmt"
	"net/url"

	"github.com/Azure/azure-storage-blob-go/2017-07-29/azblob"
)

const (
	accountName        = "devstoreaccount1"
	accountKey         = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
	blobEndpoint       = "127.0.0.1:10000"
	blobEndpointScheme = "http"
	blobContainer      = "tests"
	blobName           = "test.zip"
)

func main() {
	credential := azblob.NewSharedKeyCredential(accountName, accountKey)

	pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{})

	containerURL := azblob.NewContainerURL(url.URL{
		Scheme: blobEndpointScheme,
		Host:   blobEndpoint,
		Path:   "/" + blobContainer,
	}, pipeline)

	blobURL := containerURL.NewBlockBlobURL(blobName)

	ctx := context.Background()

	resp, err := blobURL.Download(ctx, 0, 0, azblob.BlobAccessConditions{}, false)
	if err != nil {
		fmt.Print(err)
		return
	}
	fmt.Print(resp.ContentType())
}
$ go run main.go
output
-> blob/vendor/github.com/Azure/azure-storage-blob-go/2017-07-29/azblob.NewResponseError, /go/src/blob/vendor/github.com/Azure/azure-storage-blob-go/2017-07-29/azblob/zz_generated_response_error.go:29
===== RESPONSE ERROR (ServiceCode=) =====
Description=failed to unmarshal response body, Details: (none)
   GET http://127.0.0.1:10000/tests/test.zip?timeout=61
   Authorization: REDACTED
   User-Agent: [Azure-Storage/0.1 (go1.10.3; darwin)]
   X-Ms-Client-Request-Id: [e574d5cf-4592-443b-517c-13decb79d178]
   X-Ms-Date: [Fri, 22 Jun 2018 18:36:59 GMT]
   X-Ms-Version: [2017-07-29]
   --------------------------------------------------------------------------------
   RESPONSE Status: 501 Not Implemented
   Connection: [keep-alive]
   Content-Length: [229]
   Content-Type: [text/html; charset=utf-8]
   Date: [Fri, 22 Jun 2018 18:36:59 GMT]
   Etag: [W/"e5-ZoqHfPrsJlaK3wwTnhkcerEarRI"]
   X-Powered-By: [Express]


EOF

Extra information

Tested using the Docker image:

$ docker run --rm -e executable=blob -p 10000:10000 arafato/azurite
$ export AZURE_STORAGE_CONNECTION_STRING='DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;'
$ az storage container create -n tests
$ az storage blob upload -c tests -f test.zip -n test.zip

Store the state of queues for creating pre-initialized Docker images

Hi,

Is it possible to persist the state of the queues, like the blob containers?
I don't need to persist messages, but persisting queues, policies and metadata could be helpful.

I would like to create a Docker image for our test environment, where blob containers and queues are pre-initialized.

Currently I'm using a custom start script to run azurite and create the containers and queues (a sketch of that kind of script is below), but this might not be the way to go.
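
For reference, a minimal sketch of what such a start script could do with azure-storage-node, assuming Azurite is already listening on the well-known development connection string; the container and queue names are hypothetical:

// init.js - hypothetical pre-initialization script, run after Azurite starts.
const azure = require('azure-storage');

const connectionString =
  'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;' +
  'AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;' +
  'BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;' +
  'QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;';

const blobService = azure.createBlobService(connectionString);
const queueService = azure.createQueueService(connectionString);

// Create a container and a queue so the image starts "pre-initialized".
blobService.createContainerIfNotExists('test-container', err => {
  if (err) throw err;
  console.log('container ready');
});
queueService.createQueueIfNotExists('test-queue', err => {
  if (err) throw err;
  console.log('queue ready');
});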

Best
nobiehl

Test Failing : Azure-Storage-Node - BlobContainer - getContainerProperties - should work

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:317:20

blobService.getContainerProperties(containerName, function (getError, container2, getResponse) {
          assert.equal(getError, null);
          assert.notEqual(container2, null);
          if (container2) {
            assert.equal('unlocked', container2.lease.status);
            assert.equal('available', container2.lease.state);
            assert.equal(null, container2.lease.duration);
            assert.notEqual(null, container2.requestId);
            assert.notEqual(container2.hasImmutabilityPolicy, null);
            assert.deepStrictEqual(typeof container2.hasImmutabilityPolicy, 'boolean');
            assert.notEqual(container2.hasLegalHold, null);
            assert.deepStrictEqual(typeof container2.hasLegalHold, 'boolean');

azure-storage-node tests

   BlobContainer
     getContainerProperties
       should work:
 Uncaught AssertionError [ERR_ASSERTION]: undefined != null
  at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:317:20
  at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:698:7)
  at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
  at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
  at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
  at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
  at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:701:5)
  at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
  at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
  at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
  at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
  at endReadableNT (_stream_readable.js:1045:12)
  at _combinedTickCallback (internal/process/next_tick.js:138:11)
  at process._tickCallback (internal/process/next_tick.js:180:9)

Could we format logs better?

Currently (working with 0.9.7), if I open Docker container logs in a text reader I see something like

�[0mGET /account/customer?restype=container&comp=list �[32m200 �[0m19.399 ms - 255�[0m

Escape chars are so annoying :(
Could we turn them off?

Also, it would be really helpful to print a timestamp in some way, e.g. HH:mm:ss.SSS

Thanks a lot!

Test Failing : Azure-Storage-Node - BlobContainer - setContainerAcl - should work

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:516:18

var options = {publicAccessLevel: BlobUtilities.BlobContainerPublicAccessType.BLOB};
        blobService.setContainerAcl(containerName, null, options, function (setAclError, setAclContainer1, setResponse1) {
          assert.equal(setAclError, null);
          assert.notEqual(setAclContainer1, null);
          assert.ok(setResponse1.isSuccessful);

azure-storage-node tests
base.js:266
BlobContainer
setContainerAcl
should work:
Uncaught AssertionError [ERR_ASSERTION]: { Error
at Function.StorageServiceClient._normalizeError (E:\repo\azurite\Azurite\externaltests\azure-storage-node\ == null
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:516:18
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:949:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:952:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Doesn't work when trying to acquire a lease with Azure Functions Core Tools

Hi there,

I have an Azure durable function running locally through the Azure Functions Core Tools. The main entry function runs fine, but it has a problem triggering an orchestration function through the orchestration client. From the container log, I can see that requests to PUT /devstoreaccount1/myfunc-leases/default/myfunc-control-00?comp=lease fail with a 400 error. It seems to have a problem with x-ms-proposed-lease-id.
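
For context, a minimal sketch of the kind of lease acquisition that fails, using azure-storage-node; the container and blob names are taken from the request path in the log, and the proposed lease id is hypothetical:

const azure = require('azure-storage');

// Connection string assumed to point at the local Azurite instance.
const blobService = azure.createBlobService(process.env.AZURE_STORAGE_CONNECTION_STRING);

// The Durable Functions host proposes its own lease id; a 400 here suggests
// Azurite has trouble with the x-ms-proposed-lease-id header it sends.
blobService.acquireLease(
  'myfunc-leases',
  'default/myfunc-control-00',
  { leaseDuration: 30, proposedLeaseId: '11111111-2222-3333-4444-555555555555' },
  (err, result) => {
    if (err) return console.error('acquireLease failed:', err.statusCode, err.message);
    console.log('lease id:', result.id);
  });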

Is there any way I can see the full request in the log?

I have no problem running this against a real storage account.

createTableIfNotExists fails with TableNotFound when table does not exist

I'm using azure/azure-storage-node.
This is the code that fails (in azure/azure-storage-node)

There is this check:

else if (responseObject.error && responseObject.error.statusCode === Constants.HttpConstants.HttpResponseCodes.NotFound) {
      responseObject.error = null;
      responseObject.tableResult.exists = false;
      responseObject.response.isSuccessful = true;
    }

But the responseObject is as follows:

{ error:
   { StorageError: <?xml version="1.0" encoding="utf-8"?><Error><Code>TableNotFound</Code><Message>The table specified does not exist.</Message></Error>
    at Function.StorageServiceClient._normalizeError (/Users/alexander/Development/Sandbox/azurite-check/node-service/node_modules/azure-storage/lib/common/services/storageserviceclient.js:1153:12)
    at TableService.StorageServiceClient._processResponse (/Users/alexander/Development/Sandbox/azurite-check/node-service/node_modules/azure-storage/lib/common/services/storageserviceclient.js:738:50)
    at Request.processResponseCallback [as _callback] (/Users/alexander/Development/Sandbox/azurite-check/node-service/node_modules/azure-storage/lib/common/services/storageserviceclient.js:311:37)
    at Request.self.callback (/Users/alexander/Development/Sandbox/azurite-check/node-service/node_modules/request/request.js:186:22)
    at emitTwo (events.js:126:13)
    at Request.emit (events.js:214:7)
    at Request.<anonymous> (/Users/alexander/Development/Sandbox/azurite-check/node-service/node_modules/request/request.js:1163:10)
    at emitOne (events.js:116:13)
    at Request.emit (events.js:211:7)
    at IncomingMessage.<anonymous> (/Users/alexander/Development/Sandbox/azurite-check/node-service/node_modules/request/request.js:1085:12)
     name: 'StorageError',
     message: '<?xml version="1.0" encoding="utf-8"?><Error><Code>TableNotFound</Code><Message>The table specified does not exist.</Message></Error>' },
  response:
   { isSuccessful: false,
     statusCode: 404,
     body: '<?xml version="1.0" encoding="utf-8"?><Error><Code>TableNotFound</Code><Message>The table specified does not exist.</Message></Error>',
     headers:
      { 'x-powered-by': 'Express',
        'content-type': 'text/html; charset=utf-8',
        'content-length': '133',
        etag: 'W/"85-QChnKDYjVl8qDySHFUJOL2CNEFU"',
        date: 'Tue, 10 Apr 2018 08:09:42 GMT',
        connection: 'keep-alive' },
     md5: undefined },
  contentMD5: undefined,
  length: undefined,
  operationEndTime: 2018-04-10T08:09:39.194Z,
  targetLocation: 0,
  outputStreamSent: false,
  tableResult: { isSuccessful: false, statusCode: 404, TableName: 'mytable' } }

So responseObject.error.statusCode is undefined.
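
For context, a minimal sketch of the call that hits this path, using azure-storage-node; the table name is taken from the dump above, and the connection string is assumed to point at Azurite:

const azure = require('azure-storage');

const tableService = azure.createTableService(process.env.AZURE_STORAGE_CONNECTION_STRING);

// Against real Azure Storage, the 404/TableNotFound from the existence probe is
// treated as "table does not exist" and the table is then created; against Azurite
// the error surfaces because responseObject.error.statusCode is undefined
// (only response.statusCode carries the 404).
tableService.createTableIfNotExists('mytable', (error, result, response) => {
  if (error) return console.error('failed:', error.message);
  console.log('result:', result, 'status:', response.statusCode);
});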

storage services authentication

I am playing with the emulator. It seems like it does not support authentication:
https://docs.microsoft.com/en-us/rest/api/storageservices/authentication-for-the-azure-storage-services
The emulator provides a static account and key:

AccountName: devstoreaccount1
AccountKey: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

My expectation is that it should fail incoming API requests with a wrong account and/or key, but it seems Azurite does not care much about authentication. Perhaps I am missing some configuration option?
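
For illustration, a minimal sketch of a request I would expect to be rejected (signed with a deliberately wrong key), using azure-storage-node; the bogus key below is just a placeholder:

const azure = require('azure-storage');

// Deliberately wrong key: against real Azure Storage (or the official emulator)
// this request should be rejected with 403 AuthenticationFailed.
const wrongKey = Buffer.from('this-is-not-the-real-key').toString('base64');
const blobService = azure.createBlobService(
  'devstoreaccount1',
  wrongKey,
  'http://127.0.0.1:10000/devstoreaccount1');

blobService.listContainersSegmented(null, (err, result) => {
  if (err) return console.error('rejected as expected:', err.statusCode);
  console.log('accepted (unexpected for a wrong key):', result.entries.length, 'containers');
});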

*PANIC* when looking for non-existent row

There seems to be an inconsistency in how this emulator handles non-existent rows compared to how "real" Azure Table Storage handles them. I discovered this while using Streamstone, which builds event streaming capabilities on top of Azure Table Storage, and while I haven't been able to discern exactly what operations they're doing on the table storage, I have been able to build a minimal example that reproduces the problem.

To reproduce, make sure you have the dotnet CLI installed, and run

mkdir repro && cd repro
dotnet new console
dotnet add package Streamstone

Then replace the contents of Program.cs with this:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Streamstone;

namespace AzuritePanicOnStreamstoneTryOpenStream
{
    class Program
    {
        public static async Task Main(string[] args)
        {
            // Replace with CloudStorageAccount.Parse("your connection string"); for using a real Azure Storage Account
            var account = CloudStorageAccount.DevelopmentStorageAccount;

            var table = account
                .CreateCloudTableClient()
                .GetTableReference("Events");
            await table.CreateIfNotExistsAsync();

            var partition = new Partition(table, "foo");

            await Stream.TryOpenAsync(partition);
        }
    }
}

Finally, run dotnet run to see the program fail with

Unhandled Exception: Microsoft.WindowsAzure.Storage.StorageException: Unexpected response code, Expected:OK or NotFound, Received:InternalServerError
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteAsyncInternal[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext, CancellationToken token) in C:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\WindowsRuntime\Core\Executor\Executor.cs:line 315
   at Streamstone.Stream.OpenStreamOperation.ExecuteAsync()
   at AzuritePanicOnStreamstoneTryOpenStream.Program.Main(String[] args) in C:\Work\OSS\Throwaway repros\AzuritePanicOnStreamstoneTryOpenStream\Program.cs:line 17
   at AzuritePanicOnStreamstoneTryOpenStream.Program.<Main>(String[] args)

If I look at the logs for my docker container running Azurite, I see

GET /devstoreaccount1/Events(PartitionKey='foo',RowKey='SS-HEAD') 500 0.388 ms - 27
**PANIC** Something unexpected happened! Table Storage Emulator may be in an inconsistent state!
ReferenceError: partitionKey is not defined
    at TableStorageManager.queryEntities (/opt/azurite/lib/core/table/TableStorageManager.js:104:40)
    at QueryEntities.process (/opt/azurite/lib/actions/table/QueryEntities.js:12:29)
    at Object.actions.(anonymous function) [as QueryEntity] (/opt/azurite/lib/middleware/table/actions.js:52:19)
    at BbPromise.try (/opt/azurite/lib/middleware/table/actions.js:18:38)
    at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
    at Function.Promise.attempt.Promise.try (/opt/azurite/node_modules/bluebird/js/release/method.js:39:29)
    at module.exports (/opt/azurite/lib/middleware/table/actions.js:17:18)
    at Layer.handle [as handle_request] (/opt/azurite/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/opt/azurite/node_modules/express/lib/router/index.js:317:13)
    at /opt/azurite/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/opt/azurite/node_modules/express/lib/router/index.js:335:12)
    at next (/opt/azurite/node_modules/express/lib/router/index.js:275:10)
    at BbPromise.try (/opt/azurite/lib/middleware/table/validation.js:32:9)
    at tryCatcher (/opt/azurite/node_modules/bluebird/js/release/util.js:16:23)
    at Function.Promise.attempt.Promise.try (/opt/azurite/node_modules/bluebird/js/release/method.js:39:29)
    at module.exports (/opt/azurite/lib/middleware/table/validation.js:18:18)

However, if I connect to a real Azure Storage Account instead of the emulator, the program runs without error.

Queues are lost when stopping the Docker container

I'm running Azurite via Docker, and I've found that even though I'm storing the blob & table data next to my project, the queue data isn't stored there as well. If I then stop and start the container, any queues I had are lost.

This behavior doesn't match what the Storage Emulator does, since that persists queue data across restarts. It'd be great if Azurite did this as well. I'd prefer the queues to be stored alongside my project with the other data, but even persisting them on the container would be better than losing them.

Requesting queue policies failed with azure-cli and Storage Account Explorer

Hi,

Currently I'm not able to list queue policies with the Azure CLI, and I'm also not able to list them with the Storage Account Explorer. The policies are created successfully, though.

I noticed that the Storage Account Emulator answers the GET request differently than Azurite. There are some extra characters in front of the response and a zero at the end (these look like HTTP chunked transfer-encoding markers: the chunk size in hex, followed by a terminating 0 chunk).

Storage Account Emulator 5.3.0.0
Azurite 2.6.5
Azure CLI 2.0.30
Windows 10 + WSL

1. Storage Account Emulator + azure cli

az storage queue create --name test-queue
az storage queue policy list --queue-name test-queue

RESPONSE (Fiddler):

3E
<?xml version="1.0" encoding="utf-8"?><SignedIdentifiers />
0
az storage queue policy create --name read --queue-name test-queue --permissions rp --start 2018-04-01 --expiry 2018-04-02
az storage queue policy list --queue-name test-queue

RESPONSE (Fiddler):

F7
<?xml version="1.0" encoding="utf-8"?><SignedIdentifiers><SignedIdentifier><Id>read</Id><AccessPolicy><Start>2018-04-01T00:00:00.0000000Z</Start><Expiry>2018-04-02T00:00:00.0000000Z</Expiry><Permission>rp</Permission></AccessPolicy></SignedIdentifier></SignedIdentifiers>
0

2. Azurite + azure cli

az storage queue create --name test-queue
az storage queue policy list --queue-name test-queue

RESPONSE (Fiddler):

<?xml version="1.0" encoding="utf-8"?>
<SignedIdentifiers/>
az storage queue policy create --name read --queue-name test-queue --permissions rp --start 2018-04-01 --expiry 2018-04-02
az storage queue policy list --queue-name test-queue

The Azure CLI returns nothing, but the response from Azurite looks like:
RESPONSE (Fiddler):

<?xml version="1.0" encoding="utf-8"?>
<SignedIdentifiers>
    <SignedIdentifiers>
        <Id>read</Id>
        <AccessPolicy>
            <Start>2018-04-01</Start>
            <Expiry>2018-04-02</Expiry>
            <Permission>rp</Permission>
        </AccessPolicy>
    </SignedIdentifiers>
</SignedIdentifiers>

As you can see, the chunk-size characters in front of the response are missing, and the trailing zero is not there either.

A related issue: the Azure CLI first requests the existing queue policies, merges them with the new one (old + new), and sends the combined set back. That is not possible with Azurite, so all existing policies are overwritten by the new one.

Currently I'm working around the issue by using custom scripts for creating queues and policies, but for debugging and some test-environment topics I need to use the Azure CLI and the Storage Account Explorer.
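
For reference, a minimal sketch of the same create/set/list flow via azure-storage-node instead of the CLI; the queue and policy names mirror the commands above, and the connection string is assumed to point at Azurite:

const azure = require('azure-storage');

const queueService = azure.createQueueService(process.env.AZURE_STORAGE_CONNECTION_STRING);

const signedIdentifiers = {
  read: {
    Permissions: 'rp',
    Start: new Date('2018-04-01T00:00:00Z'),
    Expiry: new Date('2018-04-02T00:00:00Z')
  }
};

queueService.createQueueIfNotExists('test-queue', err => {
  if (err) throw err;
  queueService.setQueueAcl('test-queue', signedIdentifiers, err2 => {
    if (err2) throw err2;
    // Listing the policies back is where the CLI and Storage Explorer fail against Azurite.
    queueService.getQueueAcl('test-queue', (err3, result) => {
      if (err3) return console.error('getQueueAcl failed:', err3.message);
      console.log(result.signedIdentifiers);
    });
  });
});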

Best
nobiehl

Table Filters Returning No Results

I'm using Azurite, Version 2.6.5 on Mac OSX

I perform the following request and I receive all results from the database:

GET /devstoreaccount1/Signposts HTTP/1.1
Host: 127.0.0.1:10002
Authorization: SharedKey devstoreaccount1:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
x-ms-date: 1527335502
Content-Type: application/json
x-ms-version: 2017-07-29
Cache-Control: no-cache
Postman-Token: 463d93a9-df19-49d1-8e59-eaf509432d41
{
    "value": [
        {
            "Timestamp": "2018-04-19T12:20:36.776Z",
            "title": "Benefits - GOV.UK",
            "description": "The best place to find government services and information. Simpler, clearer, faster",
            "url": "https://www.gov.uk/browse/benefits",
            "image": "/static/images/govuk.png",
            "categories": "Benefits",
            "PartitionKey": "Signposts",
            "RowKey": "574d5f400d3e106660158ac3"
        },
        {
            "Timestamp": "2018-04-19T12:20:36.791Z",
            "title": "Business & Self-Employment - GOV.UK",
            "description": "The best place to find government services and information. Simpler, clearer, faster",
            "url": "https://www.gov.uk/browse/business",
            "image": "/static/images/govuk.png",
            "categories": "Business",
            "PartitionKey": "Signposts",
            "RowKey": "574d5f400d3e106660158ac4"
        }
    ]
}

If I submit a request with a query (in this case on the PartitionKey) I get no results:

GET /devstoreaccount1/Signposts?$filter=PartitionKey%20eq%20%27Signposts%27 HTTP/1.1
Host: 127.0.0.1:10002
Authorization: SharedKey devstoreaccount1:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
x-ms-date: 1527335659
Content-Type: application/json
x-ms-version: 2017-07-29
Cache-Control: no-cache
Postman-Token: ae88e955-2304-409d-820f-60e5b0a7c6d3
{
    "value": []
}

This query was replayed via Postman, but I first tried it via the Microsoft.WindowsAzure.Storage library, which gave the same results.
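
For comparison, the equivalent filtered query via azure-storage-node would look roughly like this (table name and partition key taken from the requests above; the connection string is assumed to point at Azurite):

const azure = require('azure-storage');

const tableService = azure.createTableService(process.env.AZURE_STORAGE_CONNECTION_STRING);

// Same $filter as the raw request: PartitionKey eq 'Signposts'
const query = new azure.TableQuery().where('PartitionKey eq ?', 'Signposts');

tableService.queryEntities('Signposts', query, null, (err, result) => {
  if (err) return console.error('query failed:', err.message);
  console.log('entities returned:', result.entries.length);
});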

Is the query wrong or is this a bug in Azurite?

Thanks

Simon

Tag latest releases

The latest release on npm is v2.6.5, but on GitHub the last tag was v2.6.3. Could you please tag the 2 missing versions on GitHub?

`blobPort` option doesn't work

  1. Launch Azurite with blobPort=10100
  2. Observe output

Expected:
"Azure Blob Storage Emulator listening on port 10100"

Actual:
"Azure Blob Storage Emulator listening on port 10000"

PS C:\Users\cralvord> azurite -l C:\Users\cralvord\azurite\ -blobPort=10100 -queuePort=10101 -tablePort=10102

 _______                   _
(_______)                 (_)  _
 _______ _____ _   _  ____ _ _| |_ _____
|  ___  (___  ) | | |/ ___) (_   _) ___ |
| |   | |/ __/| |_| | |   | | | |_| ____|
|_|   |_(_____)____/|_|   |_| \__)_____)

Azurite, Version 2.6.5
A lightweight server clone of Azure Storage

Azure Table Storage Emulator listening on port 10102
Azure Queue Storage Emulator listening on port 10101
Azure Blob Storage Emulator listening on port 10000

Test Failing : Azure-Storage-Node - BlobContainer - createContainer - should work

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js

        // creating again will result in a duplicate error
        blobService.createContainer(containerName, function (createError2, container2) {
          assert.equal(createError2.code, Constants.BlobErrorCodeStrings.CONTAINER_ALREADY_EXISTS);
          assert.equal(container2, null);

azure-storage-node tests
base.js:266
BlobContainer
createContainer
should work:
Uncaught AssertionError [ERR_ASSERTION]: undefined == 'ContainerAlreadyExists'
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:166:18
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:568:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:571:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Test Failing : Azure-Storage-Node - BlobContainer - listBlobs - should work

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:709:26

// Test listing multiple blobs
              listBlobsWithoutPrefix(null, null, function() {
                assert.equal(blobs.length, 2);

                var entries = 0;
                blobs.forEach(function (blob) {
                  assert.notEqual(blob.creationTime, null);
                  assert.deepStrictEqual(typeof blob.creationTime, 'string');

                  if (blob.name === blobName1) {
                    entries += 1;
                  }
                  else if (blob.name === blobName2) {
                    entries += 2;
                  }
                });

                assert.equal(entries, 3);

azure-storage-node tests
base.js:266
BlobContainer
listBlobs
should work:
Uncaught AssertionError [ERR_ASSERTION]: undefined != null
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:709:26
at Array.forEach (<anonymous>)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:708:23
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:682:13
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:5824:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:5827:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Standalone windows executables

In the Nuget package: "Azurite is packaged into a single self-contained executable"
(Side note: Opening the .nupkg as an archive reveals it's actually 3 executables: blob.exe, queue.exe, and table.exe)

Is there any reason that those aren't being published outside of the Nuget package? It seems more convenient to me than having to install it to a particular VS solution. (And easier than messing with Node.js. And less memory intensive than using Docker.)

Incorrect blob type when fetching the copy status

The following code uploads a 16 MB binary blob and then copies it onto itself:

CloudStorageAccount cloud = CloudStorageAccount.parse("DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;");
CloudBlobContainer container = cloud.createCloudBlobClient().getContainerReference("test");
container.deleteIfExists();
container.create();

for (int i = 0; i < 100; i++) {
    CloudBlockBlob blob = container.getBlockBlobReference("test" + i);
    System.out.print("Uploading: " + blob.getName() + "... ");
    blob.uploadFromByteArray(new byte[16 * 1024 * 1024], 0, 16 * 1024 * 1024);
    System.out.println("Done");

    blob.startCopy(blob);
    CopyStatus status;
    while (true) {
        blob.downloadAttributes();
        status = blob.getCopyState().getStatus();
        if (status != CopyStatus.PENDING) {
            break;
        }
        Thread.sleep(200);
    }
    Assert.assertEquals(CopyStatus.SUCCESS, status);
}

The following exception is thrown randomly after a few iterations:

com.microsoft.azure.storage.StorageException: Incorrect Blob type, please use the correct Blob type to access a blob on the server. Expected BLOCK_BLOB, actual UNSPECIFIED.

	at com.microsoft.azure.storage.blob.CloudBlob$8.preProcessResponse(CloudBlob.java:1238)
	at com.microsoft.azure.storage.blob.CloudBlob$8.preProcessResponse(CloudBlob.java:1204)
	at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:139)
	at com.microsoft.azure.storage.blob.CloudBlob.downloadAttributes(CloudBlob.java:1197)
	at com.microsoft.azure.storage.blob.CloudBlob.downloadAttributes(CloudBlob.java:1164)
	at org.apache.jackrabbit.oak.segment.azure.LoopFailTest.test(LoopFailTest.java:28)

Unable to set CORS rules programmatically

In order to push to a storage queue from another origin, I need to add the appropriate CORS rule. At startup I try to set this programmatically for the queue service:

  const properties = {
    Cors: {
      CorsRule: [{
        AllowedOrigins: ['*'],
        AllowedMethods: ['POST', 'GET', 'HEAD', 'PUT'],
        AllowedHeaders: ['*'],
        ExposedHeaders: ['*'],
        MaxAgeInSeconds: 3600
      }]
    }
  }

  qService.setServiceProperties(properties, function (error) {
    if (error) {
      throw error
    }
  })

This fails with the following error....

StorageError: Not Implemented yet.
    at Function.StorageServiceClient._normalizeError (~/dev/_spikes-poc/sas/server/node_modules/azure-storage/lib/common/services/storageserviceclient.js:1160:12)
    at QueueService.StorageServiceClient._processResponse (~/dev/_spikes-poc/sas/server/node_modules/azure-storage/lib/common/services/storageserviceclient.js:744:50)
    at Request.processResponseCallback [as _callback] (~/dev/_spikes-poc/sas/server/node_modules/azure-storage/lib/common/services/storageserviceclient.js:317:37)
    at Request.self.callback (~/dev/_spikes-poc/sas/server/node_modules/request/request.js:185:22)
    at emitTwo (events.js:126:13)
    at Request.emit (events.js:214:7)
    at Request.<anonymous> (~/dev/_spikes-poc/sas/server/node_modules/request/request.js:1157:10)
    at emitOne (events.js:116:13)
    at Request.emit (events.js:211:7)
    at IncomingMessage.<anonymous> (~/dev/_spikes-poc/sas/server/node_modules/request/request.js:1079:12)

I also tried hitting the associated REST endpoints, but I get 501 Not Implemented:

GET /devstoreaccount1?restype=service&comp=properties HTTP/1.1
501: Not Implemented

PUT /devstoreaccount1?restype=service&comp=properties HTTP/1.1
501: Not Implemented

Is there another convention for setting CORS rules in Azurite?

Test Failing : Azure-Storage-Node - BlobContainer - createContainer - should work when the headers are set to string type:

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:201:18

 // creating again will result in a duplicate error
        blobService.createContainer(containerName, function (createError2, container2) {
          assert.equal(createError2.code, Constants.BlobErrorCodeStrings.CONTAINER_ALREADY_EXISTS);
          assert.equal(container2, null);
          blobService.removeAllListeners('sendingRequestEvent');

azure-storage-node tests
base.js:266
BlobContainer
createContainer
should work when the headers are set to string type:
Uncaught AssertionError [ERR_ASSERTION]: undefined == 'ContainerAlreadyExists'
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:201:18
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:568:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:571:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Docker: console not functioning

When I attempt to connect to the console, I get this error message:

OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown

Maybe that's deliberate?

Thanks!

Table operation errors

I've been playing with Azurite some, and it appears there may be a problem with its table implementation. Regardless of platform, I get errors when trying to add entities, delete entities, and delete tables. Some details:

  • We use azure-storage-node for sending requests.
  • We expect a certain result object when adding entities succeeds; however, we are getting an empty result object. This result object is supposed to provide information about the new entity, such as an etag (see the sketch after the log below).
  • When attempting to delete entities or tables, we get errors like the following (may be related to #17, #19):
POST /devstoreaccount1/test01 204 2.216 ms - -
GET /devstoreaccount1/test01?%24top=1000 200 1.560 ms - 575
GET /devstoreaccount1/test01?%24top=1000 200 0.747 ms - 575
POST /devstoreaccount1/$batch 500 1.124 ms - 45
**PANIC** Something unexpected happened! Table Storage Emulator may be in an inconsistent state!
TypeError: Cannot read property 'tableName' of undefined
    at BbPromise.try (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\lib\middleware\table\validation.js:24:48)
    at tryCatcher (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\util.js:16:23)
    at Function.Promise.attempt.Promise.try (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\method.js:39:29)
    at module.exports (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\lib\middleware\table\validation.js:18:18)
    at Layer.handle [as handle_request] (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\router\layer.js:95:5)
    at trim_prefix (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\router\index.js:317:13)
    at C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\router\index.js:284:7
    at Function.process_params (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\router\index.js:335:12)
    at next (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\router\index.js:275:10)
    at C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\body-parser\lib\read.js:130:5
    at invokeCallback (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\raw-body\index.js:224:16)
    at done (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\raw-body\index.js:213:7)
    at IncomingMessage.onEnd (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\raw-body\index.js:273:7)
    at emitNone (events.js:106:13)
   at IncomingMessage.emit (events.js:208:7)
    at endReadableNT (_stream_readable.js:1055:12)
GET /devstoreaccount1/test01?%24top=1000 200 0.869 ms - 575
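
For reference, a minimal sketch of the entity insert whose result we expect to carry an etag, using azure-storage-node; the table name is taken from the log, and the entity values are hypothetical:

const azure = require('azure-storage');

const tableService = azure.createTableService(process.env.AZURE_STORAGE_CONNECTION_STRING);
const entGen = azure.TableUtilities.entityGenerator;

const entity = {
  PartitionKey: entGen.String('part1'),
  RowKey: entGen.String('row1'),
  value: entGen.String('hello')
};

tableService.insertEntity('test01', entity, (err, result, response) => {
  if (err) return console.error('insert failed:', err.message);
  // Against real Azure Storage, result['.metadata'].etag describes the new entity;
  // against Azurite we observe an empty result object instead.
  console.log('etag:', result['.metadata'] && result['.metadata'].etag, 'status:', response.statusCode);
});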

EACCES error on startup

Azurite Version: 2.6.5
OS Version: Windows 10

I get a continuous stream of error outputs when running Azurite from a PowerShell prompt (see below). However, if I run PowerShell as administrator and allow Node.js through the firewall, everything appears to work normally.

Sounds like a potential security risk.

PS> azurite

 _______                   _
(_______)                 (_)  _
 _______ _____ _   _  ____ _ _| |_ _____
|  ___  (___  ) | | |/ ___) (_   _) ___ |
| |   | |/ __/| |_| | |   | | | |_| ____|
|_|   |_(_____)____/|_|   |_| \__)_____)

Azurite, Version 2.6.5
A lightweight server clone of Azure Storage

events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: listen EACCES 0.0.0.0:10002
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at Server.setupListenHandle [as _listen2] (net.js:1334:19)
    at listenInCluster (net.js:1392:12)
    at Server.listen (net.js:1476:7)
    at Function.listen (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\application.js:618:24)
    at env.init.then.then (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\lib\AzuriteTable.js:52:35)
    at tryCatcher (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\util.js:16:23)
    at Promise._settlePromiseFromHandler (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:512:31)
    at Promise._settlePromise (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:569:18)
    at Promise._settlePromise0 (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:614:10)
    at Promise._settlePromises (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:693:18)
    at Promise._fulfill (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:638:18)
    at C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\nodeback.js:42:21
    at Timeout._onTimeout (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\lokijs\src\lokijs.js:2633:15)
    at ontimeout (timers.js:475:11)
events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: listen EACCES 0.0.0.0:10001
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at Server.setupListenHandle [as _listen2] (net.js:1334:19)
    at listenInCluster (net.js:1392:12)
    at Server.listen (net.js:1476:7)
    at Function.listen (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\express\lib\application.js:618:24)
    at env.init.then (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\lib\AzuriteQueue.js:48:35)
    at tryCatcher (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\util.js:16:23)
    at Promise._settlePromiseFromHandler (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:512:31)
    at Promise._settlePromise (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:569:18)
    at Promise._settlePromise0 (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:614:10)
    at Promise._settlePromises (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:693:18)
    at Promise._fulfill (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\promise.js:638:18)
    at C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\bluebird\js\release\nodeback.js:42:21
    at xfs.stat (C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\fs-extra\lib\mkdirs\mkdirs.js:56:16)
    at C:\Users\cralvord\AppData\Roaming\npm\node_modules\azurite\node_modules\graceful-fs\polyfills.js:287:18
events.js:183
      throw er; // Unhandled 'error' event
      ^
...


Copied blob has incorrect content type

After copying a blob, the copy has a content type of "application/octet-stream" instead of whatever was on the source blob.

Example PowerShell code to reproduce is below.
When it runs against a real Azure Storage account, the copy's resulting content type is "image/png" as expected.

New-Item "emptyfile.zzz"

#Upload as blob
$ctx = New-AzureStorageContext -Local
$container = New-AzureStorageContainer "mime-bug-repro" -Context $ctx
$originalBlob = Set-AzureStorageBlobContent -Blob "emptyfile.zzz" -Container $container.Name -File "emptyfile.zzz" -BlobType Block -Properties @{"ContentType" = "image/png"} -Context $ctx

#Initiate copy
$copyJob = Start-AzureStorageBlobCopy -SrcBlob $originalBlob.Name -SrcContainer $container.Name -DestBlob "copy-of-emptyfile.zzz" -DestContainer $container.Name -Context $ctx -DestContext $ctx
#Wait for copy.
while(($copyJob | Get-AzureStorageBlobCopyState).Status -ne "Success")
{
    sleep 5
}

#Get copy
$copiedBlob = Get-AzureStorageBlob -Container $container.Name -Blob "copy-of-emptyfile.zzz" -Context $ctx 

#Check ContentType
"The content type of the copy is: $($copiedBlob.ContentType)"

#Clean up file and container
$container | Remove-AzureStorageContainer -Force
Remove-Item "emptyfile.zzz"

Unable to connect to Azurite using the Azure JavaScript library

Hi,

I am trying to connect to Azurite using the Azure JavaScript library.

When I make a GET request to any container, the browser tries to send a preflight request, which fails with error code 501 Not Implemented. Even when I make an OPTIONS request using Fiddler, it fails with the same error.
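
For illustration, a minimal browser-side sketch of the kind of request that forces the preflight; the container name is hypothetical, and the custom x-ms-* headers are what make the browser send an OPTIONS request first:

// Running in the browser: any cross-origin request with custom x-ms-* headers
// triggers an OPTIONS preflight to Azurite before the GET is sent.
fetch('http://127.0.0.1:10000/devstoreaccount1/mycontainer?restype=container&comp=list', {
  method: 'GET',
  headers: {
    'x-ms-version': '2017-07-29',
    'x-ms-date': new Date().toUTCString()
  }
})
  .then(res => console.log('status:', res.status))
  .catch(err => console.error('blocked by preflight/CORS:', err));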

What configuration do I need to make this work?

Help request: How to set up Azurite HA?

I was wondering how others have solved high availability in their Azurite deployments.

The old-school approach of mapping storage from a NAS to two nodes would most probably work, but then the storage system itself is a single point of failure.

On Azure they use data replication, but what is a good/tested option for this with Azurite?

Test Failing : Azure-Storage-Node - BlobContainer - setContainerAcl - should work with signed identifiers

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:634:01

Assuming the issue is generated here, either in how Azurite writes or returns the data:

var options = {publicAccessLevel: BlobUtilities.BlobContainerPublicAccessType.OFF};
      blobService.setContainerAcl(containerName, signedIdentifiers, options, function (setAclError, setAclContainer, setAclResponse) {
        assert.equal(setAclError, null);
        assert.notEqual(setAclContainer, null);
        assert.ok(setAclResponse.isSuccessful);
        setTimeout(function () {
          blobService.getContainerAcl(containerName, function (getAclError, containerAcl, getAclResponse) {
            assert.equal(getAclError, null);
            assert.notEqual(containerAcl, null);
            assert.notEqual(getAclResponse, null);

            if (getAclResponse) {
              assert.equal(getAclResponse.isSuccessful, true);
            }

            assert.equal(containerAcl.signedIdentifiers.id1.Start.getTime(), new Date('2009-10-10T00:00:00.123Z').getTime());

azure-storage-node tests
base.js:266
BlobContainer
setContainerAcl
should work with signed identifiers:
Uncaught TypeError: Cannot read property 'split' of undefined
at Object.exports.parse (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\lib\common\util\iso8061date.js:48:22)
at E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\lib\common\models\aclresult.js:105:44
at Array.forEach (<anonymous>)
at Object.exports.parse (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\lib\common\models\aclresult.js:101:26)
at processResponseCallback (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:874:68)
at Request.processResponseCallback [as _callback] (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite_pdressel\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Test Failing : Azure-Storage-Node - BlobContainer - setContainerAcl - should work with policies

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:586:01

Assuming that it is caused by either the failing set or get container ACL code:

blobService.setContainerAcl(containerName, signedIdentifiers, options, function (setAclError, setAclContainer1, setResponse1) {
        assert.equal(setAclError, null);
        assert.notEqual(setAclContainer1, null);
        assert.ok(setResponse1.isSuccessful);
        
        setTimeout(function () {
          blobService.getContainerAcl(containerName, function (getAclError, getAclContainer1, getResponse1) {
            assert.equal(getAclError, null);
            assert.notEqual(getAclContainer1, null);
            assert.equal(getAclContainer1.publicAccessLevel, BlobUtilities.BlobContainerPublicAccessType.BLOB);
            assert.equal(getAclContainer1.signedIdentifiers.readwrite.Expiry.getTime(), readWriteExpiryDate.getTime());
            assert.ok(getResponse1.isSuccessful);
            
            options.publicAccessLevel = BlobUtilities.BlobContainerPublicAccessType.CONTAINER;
            blobService.setContainerAcl(containerName, null, options, function (setAclError2, setAclContainer2, setResponse2) {
              assert.equal(setAclError2, null);
              assert.notEqual(setAclContainer2, null);
              assert.ok(setResponse2.isSuccessful);

azure-storage-node tests
base.js:266
BlobContainer
setContainerAcl
should work with policies:
Uncaught TypeError: Cannot read property 'split' of undefined
at Object.exports.parse (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\util\iso8061date.js:48:22)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\models\aclresult.js:105:44
at Array.forEach (<anonymous>)
at Object.exports.parse (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\models\aclresult.js:101:26)
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:874:68)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Docker image on Windows

I use AppVeyor as my build server and they only support Windows images (this week they soft released a paid upgrade to support Windows and Linux, but I don't want to pay extra just to use Azurite for my tests).

Would it be possible to publish both a Linux and Windows based docker image?

For now my build scripts check to see if they're running on AppVeyor and if so start up the Windows storage emulator instead of setting up Azurite. I'd prefer to run the same for dev & ci though.

I tried using the node package instead of docker but when you run that it blocks and the build never completes. Docker seems to be the best solution for both local dev & ci.

Test Failing : Azure-Storage-Node - BlobContainer - createContainerIfNotExists - should create a container if not exists:

Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:236:24

blobService.createContainerIfNotExists(containerName, function (createError3) {
                assert.notEqual(createError3, null);
                assert.equal(createError3.code, 'ContainerBeingDeleted');
                done();
              });

azure-storage-node tests
base.js:266
BlobContainer
createContainerIfNotExists
should create a container if not exists:
Uncaught AssertionError [ERR_ASSERTION]: null != null
+ expected - actual
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:236:24
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:642:9
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:568:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:571:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)

Blob container no longer accessible

Azurite 2.6.5
OSX 10.13.3

Summary: A blob container that was once accessible from both the Python drivers (from a Django app) and Azure Storage Explorer is now returning 404s for images that were previously accessible. The blob container itself doesn't even show up in Storage Explorer anymore.

When Azurite is running, requests to the container now return 404:

GET /devstoreaccount1/test/images/product-picture-system-overview-1200x900_P2Xd.height-150.png 404 0.807 ms - 131
GET /devstoreaccount1/test/images/buddy-logo.original.png 404 0.752 ms - 131
GET /devstoreaccount1/test/images/buddy_bg.original.png 404 0.526 ms - 131

Azure Storage Explorer, which once worked, now does not show the container.
In the folder I use to store data, I can see the various JSON configs as well as the __blobstorage__ folder with my files in it. The __azurite_db_blob__.json does indeed contain my container config:

{
    "filename": "azureStorage/__azurite_db_blob__.json",
    "collections": [{
        "name": "Containers",
        "data": [{
            "metaProps": {},
            "entityType": "Container",
            "leaseState": "available",
            "access": "private",
            "name": "test",
            "etag": "hofpfmX+/GTL3Uk/7EKWIm16ETU",
            "meta": {
                "revision": 0,
                "created": 1519943461794,
                "version": 0
            },
            "$loki": 1
        }],
        "idIndex": [1],
        "binaryIndices": {},
        "constraints": null,
        "uniqueNames": [],
        "transforms": {},
        "objType": "Containers",
        "dirty": false,
        "cachedIndex": null,
        "cachedBinaryIndex": null,
        "cachedData": null,
        "adaptiveBinaryIndices": true,
        "transactional": false,
        "cloneObjects": false,
        "cloneMethod": "parse-stringify",
        "asyncListeners": false,
        "disableMeta": false,
        "disableChangesApi": true,
        "disableDeltaChangesApi": true,
        "autoupdate": false,
        "serializableIndices": true,
        "ttl": null,
        "maxId": 1,
        "DynamicViews": [],
        "events": {
            "insert": [null],
            "update": [null],
            "pre-insert": [],
            "pre-update": [],
            "close": [],
            "flushbuffer": [],
            "error": [],
            "delete": [null],
            "warning": [null]
        },
        "changes": []
    }, {
        "name": "ServiceProperties",
        "data": [],
        "idIndex": [],
        "binaryIndices": {},
        "constraints": null,
        "uniqueNames": [],
        "transforms": {},
        "objType": "ServiceProperties",
        "dirty": false,
        "cachedIndex": null,
        "cachedBinaryIndex": null,
        "cachedData": null,
        "adaptiveBinaryIndices": true,
        "transactional": false,
        "cloneObjects": false,
        "cloneMethod": "parse-stringify",
        "asyncListeners": false,
        "disableMeta": false,
        "disableChangesApi": true,
        "disableDeltaChangesApi": true,
        "autoupdate": false,
        "serializableIndices": true,
        "ttl": null,
        "maxId": 0,
        "DynamicViews": [],
        "events": {
            "insert": [null],
            "update": [null],
            "pre-insert": [],
            "pre-update": [],
            "close": [],
            "flushbuffer": [],
            "error": [],
            "delete": [null],
            "warning": [null]
        },
        "changes": []
    }, {
        "name": "test",
        "data": [{
            "metaProps": {},
            "entityType": "BlockBlob",
            "leaseState": "available",
            "access": "private",
            "name": "original_images/bundle-enterprise.png",
            "id": "QXRlc3RvcmlnaW5hbF9pbWFnZXMvYnVuZGxlLWVudGVycHJpc2UucG5n",
            "uri": "azureStorage/__blobstorage__/zZYc_Gf94S_K5DBYFoUEcw9qdT8=",
            "snapshot": false,
            "committed": true,
            "size": 28311,
            "etag": "cy4zKx42DoteaPpGlJb9eQEsDO8",
            "contentType": "image/png",
            "meta": {
                "revision": 1,
                "created": 1519943666535,
                "version": 0,
                "updated": 1519943666538
            },
            "$loki": 1
        },

No idea how to troubleshoot this.
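
One way to narrow this down might be to query the container directly with azure-storage-node and compare against what Storage Explorer shows; a minimal sketch, with the container name taken from the log above and the connection string assumed to point at Azurite:

const azure = require('azure-storage');

const blobService = azure.createBlobService(process.env.AZURE_STORAGE_CONNECTION_STRING);

// Does Azurite still know about the container at all?
blobService.getContainerProperties('test', (err, props) => {
  if (err) return console.error('getContainerProperties:', err.statusCode, err.message);
  console.log('container etag:', props.etag);

  // And can we still enumerate the blobs it claims to hold?
  blobService.listBlobsSegmented('test', null, (err2, result) => {
    if (err2) return console.error('listBlobsSegmented:', err2.statusCode, err2.message);
    result.entries.forEach(b => console.log(b.name));
  });
});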
