mwt-ds's People

Contributors

abgoswam, alexeyrodriguez, annachoromanska, ataymano, cheng-tan, chrisquirk, danmelamed, eisber, gparker, hal3, jhofman, johnlangford, kaiweichang, lhoang29, lokitoth, marco-rossi29, martinthomas, nicknussbaum, petricek, pierce1987, pmineiro, reunanen, rukshanb, sam-s, sidsen, someben, stross, syhw, wfenchel, yarikoptic

mwt-ds's Issues

join service deployment failure

In dmdp7, the eval query failed with the message

"code": "Conflict",
"message": "{\r\n "code": "Conflict",\r\n "message": "The Stream Analytics job is not in a valid state to perform this operation.",\r\n "details": {\r\n "code": "409",\r\n "message": "The Stream Analytics job is not in a valid state to perform this operation.",\r\n "correlationId": "55c5746d-0a39-415d-b110-0b8527883cda",\r\n "requestId": "911a11d3-66f2-4d4c-a738-6ae18efe7e04"\r\n }\r\n}"

AppVeyor build is failing

The AppVeyor build is failing due to the web API tests. @danmelamed, could you have a look? Is it possible to fix them so that they can run against a new deployment? We could also ignore these tests, but I think they'd be useful to have.

Comments on Deploy-DS Wiki page

A few comments re https://github.com/Microsoft/mwt-ds/wiki/Deploy-a-Decision-Service. (but I'd rather not unilaterally decide on the changes.)

• “blade”: Azure-specific jargon? Replace with “panel”?
• “tooltip button” -> “tooltip button (i)” (i.e., add that symbol)
• Note that in the “create new” option, the required field is a name. Maybe also suggest a good naming scheme for people trying multiple deployments?
• In the “legal terms”, the “accept” button is labeled “purchase”, which may be confusing. Perhaps note that if your subscription is free, clicking “purchase” will not actually spend money?
• Since not all locations seem to work, maybe advise using US-only locations?

Error: Internal Server Error on Test Drive

Happens when I click "Reset Model" while it says "Please wait as trainer has not started yet." Obviously I'm doing something I'm not supposed to do, but it would still be nice to have more graceful/informative error handling.
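A minimal sketch of a friendlier failure mode (the controller, action, and helper names below are assumptions, not the actual Management Center code):

    using System.Net;
    using System.Web.Mvc;

    public class TestDriveController : Controller
    {
        [HttpPost]
        public ActionResult ResetModel()
        {
            // Hypothetical guard: if the online trainer has not started yet,
            // return an informative 409 instead of an unhandled 500.
            if (!TrainerIsRunning())
            {
                return new HttpStatusCodeResult(HttpStatusCode.Conflict,
                    "The online trainer has not started yet; please retry in a few minutes.");
            }

            // ... existing reset logic would go here ...
            return new HttpStatusCodeResult(HttpStatusCode.OK);
        }

        // Placeholder for however the page currently determines trainer status.
        private bool TrainerIsRunning()
        {
            return false;
        }
    }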

Deployment failure

For the management center, I got the following fail in foo-2:

{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "DeploymentFailed",
        "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.",
        "details": [
          {
            "code": "Conflict",
            "message": "{\r\n  \"status\": \"Failed\",\r\n  \"error\": {\r\n    \"code\": \"ResourceDeploymentFailure\",\r\n    \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\"\r\n  }\r\n}"
          }
        ]
      }
    ]
  }
}

Improvements to clarity of test drive plot

Some concrete-ish suggestions:

  1. "Show results for the past" doesn't make sense. I expect to see an x-axis that spans exactly that time. We need to somehow convey that this is a sliding (tumbling?) window computation. We should say something like: "Data points are updated every XX. Each data point reflects data collected from a trailing window of YY". Then we might want to put a link to ASA's tumbling window definition.
  2. Something to consider: we might just set the window to 6 days or whatever the maximum is and not give the user the option to change it.

error after viewing available locations

On the staging site, after viewing the allowed locations, I see no way to proceed. Clicking my browser's back button, I get the error below.

Server Error in '/' Application.

IDX10311: RequireNonce is 'true' (default) but validationContext.Nonce is null. A nonce cannot be validated. If you don't need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to 'false'.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: Microsoft.IdentityModel.Protocols.OpenIdConnectProtocolInvalidNonceException: IDX10311: RequireNonce is 'true' (default) but validationContext.Nonce is null. A nonce cannot be validated. If you don't need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to 'false'.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[OpenIdConnectProtocolInvalidNonceException: IDX10311: RequireNonce is 'true' (default) but validationContext.Nonce is null. A nonce cannot be validated. If you don't need to check the nonce, set OpenIdConnectProtocolValidator.RequireNonce to 'false'.]
Microsoft.IdentityModel.Protocols.OpenIdConnectProtocolValidator.ValidateNonce(JwtSecurityToken jwt, OpenIdConnectProtocolValidationContext validationContext) +630
Microsoft.IdentityModel.Protocols.OpenIdConnectProtocolValidator.Validate(JwtSecurityToken jwt, OpenIdConnectProtocolValidationContext validationContext) +355
Microsoft.Owin.Security.OpenIdConnect.d__1a.MoveNext() +5307
System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +31
Microsoft.Owin.Security.OpenIdConnect.d__1a.MoveNext() +7388
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.Owin.Security.Infrastructure.d__0.MoveNext() +822
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.Owin.Security.Infrastructure.d__0.MoveNext() +333
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__5.MoveNext() +202
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.Owin.Security.Infrastructure.d__0.MoveNext() +774
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.AspNet.Identity.Owin.d__0.MoveNext() +450
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.AspNet.Identity.Owin.d__0.MoveNext() +450
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.AspNet.Identity.Owin.d__0.MoveNext() +450
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__5.MoveNext() +202
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) +13847892
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +61
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__2.MoveNext() +193
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.StageAsyncResult.End(IAsyncResult ar) +96
System.Web.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +363
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +137
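If the navigation flow can't be fixed quickly, one possible workaround (a sketch against the OWIN OpenID Connect middleware; this is not necessarily how the app configures authentication, and disabling nonce validation weakens replay protection) is:

    using Microsoft.IdentityModel.Protocols;
    using Microsoft.Owin.Security.OpenIdConnect;
    using Owin;

    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {
            app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
            {
                // ClientId, Authority, etc. stay as already configured.
                ProtocolValidator = new OpenIdConnectProtocolValidator
                {
                    // Avoids IDX10311 when the nonce cookie is lost,
                    // e.g. after navigating away and using the back button.
                    RequireNonce = false
                }
            });
        }
    }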

Reset model no longer works

Make a new deployment, visit the test drive, and click Reset Model. Some debugging shows the request returns 500 Internal Server Error.

Misleading claim

"Latest model obtained at: December 31st 0000, 7:00:00pm". Can we make this say something a bit more plausible?

EventHub deployment error

All event hubs failed to deploy in dmdp4, which differs from past deployments in that I bumped up the MB/s param to 20. Audit log revealed the following:

"properties": {
    "message Time": "7/1/2016 9:03:33 PM UTC",
    "error": "Exceeded the maximum number of allowed receivers per partition in a consumer group which is 5 TrackingId:e460a788-e7ce-4fff-ad6d-9ebb6a1991f3_B32, SystemTracker:sb-633gnx76jn2q2:eventhub:eval~32766, Timestamp:7/1/2016 9:03:27 PM Reference:98a61183-4639-4a2c-a80c-2730936e8673, TrackingId:3aaeaca7-1fb9-44b7-85ab-d6febdd18650_B32, SystemTracker:sb-633gnx76jn2q2:eventhub:eval~32766|$default, Timestamp:7/1/2016 9:03:27 PM\r\n",
    "message": "EventHubBasedInputQuotaExceededError errors are occuring too rapidly. They are being suppressed temporarily",
    "type": "EventHubBasedInputQuotaExceededError",
    "correlation ID": "da94edf7-a970-48f3-8a1d-21905c46eea8"

Auto-generate an email on successful deployment?

Can we auto-generate an email on successful deployment? Then we can include some useful links and/or advice to get people started (and we'll need to figure out what advice we want to give).

One specific thing: a link to the MC would be useful. The current process to find it is a bit tedious, even if users realize that they need to look in the FAQ.

Inconsistent status messages

Currently, the api test drive page can say both:
"Please wait the trainer has not started"
and "Last model obtained at...."

Deployment name conflict

dmdp2 failed with

Unable to edit or replace deployment 'Management_Center': previous deployment from '7/6/2016 8:21:16 PM' is still active (expiration time is '7/13/2016 8:21:15 PM'). Please see https://aka.ms/arm-deploy for usage details. (Code: DeploymentActive)

This is odd in several ways:

  • There are no errors in the audit log.
  • The timestamp is in the future
  • I'm certain that there is no such RG at the moment.
  • The name of the deployment seems too generic -- 'Management_Center'. Do we not add unique IDs to deployment names?
  • The RG thinks the deployment was successful. I can see the MC in my browser, but when I try to log in I get a runtime error.

Improvements to clarity of test drive UI

Some concrete-ish suggestions:

  1. The experimental unit duration is confusing: if you're idle for a while or if you refresh the page for some other reason, events will be completed without your reward input. I think we should bring back the timer and set the default to 15 seconds (10 seconds is too short).
  2. Related to the above, when the exp unit expires and you click thumbs up/down we are sending orphaned reward info. Instead, if we have a timer, we can force a ChooseAction refresh after the current decision's window expires.
  3. Let's pick a fixed duration for updating all text values on the page (# training examples, model timestamp, graph update time, etc.). I think every 10 seconds is good.
  4. Move the "Success..." statement to the top. Move the trainer and model messages below that and connect them somehow. For example, say "Online trainer OK. Total learned examples: XX (Last status update: [datetime])". Then perhaps on a separate line: "Latest model deployed at: [datetime]".
  5. Where do we say that clicks get a reward of "1" and non-clicks a reward of "0"? I only see the keyboard shortcuts. Can we make this more explicit?

Management_Center deployment failure

For 2/6 tests I see a failure to deploy the management_center. When I look into the audit logs I see the helpful message: "The resource operation completed with terminal provisioning state 'Failed'." This is for foo-2 and foo-6.

API test drive UI corrections

  • The x-axis for all window sizes looked consistent except for the 1m window size. Can we in general make the plot axis ranges consistent no matter what window size you use?
  • The font sizes for “Success…” and “Trainer…” seem to have decreased; can we increase them? The same goes for “Each data point…” and “Graph updated at…”.
  • We need more space between the drop down box and “Each data point”. Also replace “Window Size” -> “Window size” to be consistent with other drop down.
  • Unbold the window time as well as the “Graph updated” sentence.
  • “Each data point reflects evaluation result averaged over a hopping window of size 1h.” -> “Each data point is computed over a hopping window of size 1h”.
  • “Observered” is misspelled. Also, I don’t understand what “Observed” and “Counterfactual Policy” mean. Why is Counterfactual the only one with “Policy”?

API Test Drive graph time window > 1m leads to "no data"

After submitting about 40 examples in the Test Drive, the graph was still blank. I changed the "show results for the past:" menu down to 1m, and lines showed up. I changed it back to 1h, and the graph says "no data" again. It seems odd that changing that interval should lead to presence vs. absence of anything on the graph.

possibly new type of deployment failure

PDNE-dp-1 reports the MC as failing to deploy. Drilling down in the portal reveals no specific cause of failure. The audit logs report errors for Write Sourcecontrols and Write Deployments, but again with no specific reason that I can see.

Time Intervals

Long time intervals don't exist, and they should. Maybe do 1 minute, 1 hour, 1 day, and 1 week? (With a default of 1 week?)

wrong example count

In dmdp8, the "total learned examples" was 1 before I clicked anything. It climbed to 2 after a page refresh. This might be related to the issue that Kevin Gao reported in his email.

Empty model detected, skipping model update. (with softmax explorer)

[screenshot of the "Empty model detected, skipping model update." warning]

The VW arguments I use are: --cb_explore_adf --cb_type dr --softmax --lambda 10 -q ui
When I change the arguments to: --cb_explore_adf --cb_type dr --softmax --lambda 10
it shows the same warning across different runs on the same deployment, even after waiting minutes for the model to be learned.

When I try to replicate this with the same simulation under the epsilon-greedy setting, i.e. VW params: --cb_explore_adf --epsilon 0.2 --cb_type dr -q ui, training works as expected.

It seems like the softmax explorer doesn't work. I will also try the other exploration strategies that work with ADF.

Strange warning

When you "Deploy to Azure" then wait around for several minutes, the bell shows a message: "RunFinished
in data factory dsdatafactory1458774893, validation for table joined-examples slice start time... finished at ... "

What does this mean? And where does it come from?

Feature Request: Provide a confidence measure associated with action

For either policy or ranking usage, it would be helpful to know the confidence associated with any particular suggested action. For example, it would be helpful to be able to distinguish between the following 2 cases:
a) the probability of achieving maximal reward from the first-ranked action >> probability of achieving maximal reward from the second-ranked action
b) the probability of achieving maximal reward from the first-ranked action =~ probability of achieving maximal reward from the second-ranked action
Furthermore, it would be useful to be able to know if a particular suggested action is skewed toward exploration and/or have the ability to prevent this on a per-request basis.
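A purely hypothetical sketch of what such a response could look like (these types and property names are illustrative, not part of the current client library):

    using System.Collections.Generic;

    // Hypothetical shape for a confidence-aware decision response.
    public class ScoredAction
    {
        public int ActionId { get; set; }
        public float Probability { get; set; }    // probability the explorer assigned to this action
        public bool IsExploration { get; set; }   // true if chosen for exploration rather than exploitation
    }

    public class RankingWithConfidence
    {
        public IReadOnlyList<ScoredAction> RankedActions { get; set; }
    }

With per-action probabilities exposed, distinguishing case (a) from case (b) reduces to comparing the top two entries, and a flag like IsExploration could support suppressing exploration on a per-request basis.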

GenericTopSlotExplorer ExplorerDecision Exception

I have an app that uses decisionService.ChooseRanking with 81 actions, and the GenericTopSlotExplorer ExplorerDecision threw its "Probabilities must sum to one." exception, as shown in this screenshot.

I put a red box around the "total -1f" in the watch window, which is the condition checked before the exception is thrown.

[screenshot of the exception and the watch window]
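For what it's worth, a minimal sketch (not the library's code) of why an exact sum-to-one check can fail once 81 single-precision probabilities are accumulated:

    using System;
    using System.Linq;

    class ProbabilitySumDemo
    {
        static void Main()
        {
            // 81 equal probabilities of 1/81 each.
            float[] probs = Enumerable.Repeat(1f / 81f, 81).ToArray();
            float total = probs.Sum();

            Console.WriteLine(total);                        // typically not exactly 1 due to rounding
            Console.WriteLine(total == 1f);                  // may print False
            Console.WriteLine(Math.Abs(total - 1f) < 1e-3f); // a tolerance check passes
        }
    }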

What do I do after the deployment tutorial?

We should have clear steps for what a user should do after going through the deployment tutorial. This likely means adding a link to the test drive tutorial at the end of the deployment tutorial.

Opening "Deploy to Azure" tabs in background doesn't open our custom template

Here are the steps to reproduce the failure John initially reported. The key step is to use “middle-click” or “ctrl + click” to open the deploy-to-azure tab in the background. Specifically here’s what I did on IE:

  1. Open a fresh IE window and go to http://mwtds.azurewebsites.net/Home/Manifest. Left-click on “Deploy to Azure”. This opens a tab in the foreground and you eventually get to the New -> Custom deployment -> Parameters tabs (the two tabs to the right of the main left tab). This is the correct behavior.
  2. Repeat step 1 but use ctrl + click so the tab opens in the background. This should still yield correct behavior.
  3. Repeat step 1 but use ctrl + click to open two tabs simultaneously in the background. About half the time one of the tabs gets stuck in the “Dashboard” state. Spacing the clicks out a bit (so the first tab has a little time to load) usually causes the error to happen more reliably. Opening 3 tabs in the background simultaneously almost always results in one or more of the tabs getting stuck.

If I run this on Chrome, I get similar but slightly different behavior:

  1. Same as above.
  2. Same as above, except you will see three tabs instead of two: the “New” tab opens up separately followed by the “Custom deployment” and “Parameters” tab. Also, it takes very long to load, as if being in the background slows down the sequence of loading: look at the tab’s title and you’ll see Microsoft Azure…Dashboard…New…Custom deployment…Parameters happening very slowly. All of this is inconsistent with what you see in step 1, but perhaps acceptable since we eventually get to our settings page as desired – and if at any point you bring the tab into the foreground the loading sequence speeds up to normal speed.
  3. Same as above except the error shows up more reliably with just two background tabs.

Initial examples lost

When I start entering information immediately after a successful deployment, before the trainer has started, the examples are apparently lost. For foo-7 and foo-9 in the current deployment, I put 20 or 30 examples in initially. After the trainer started several minutes later it said "Trainer OK. Total learned examples: 2" and "Trainer OK. Total learned examples: 3".

Why is information being lost? And can we stop the information loss?

Online trainer failing to deploy on "US East" location

... with subscription “MWT Decision Support Test (pauloka)”.

... with error message of the form:
Unable to create domain name 'trainer-XXX': 'The location constraint is not valid'

... consistently: tried 4 times.

ClientDecisionServiceSample Exceptions

I spun up the Azure resources and got the API tutorial working. Next, I tried to run the ClientDecisionServiceSample examples. They gave me exceptions.

  1. Easy fix - I put a constructor on UserContext, so that Features wasn't null in the Sample.cs run.
    public UserContext()
    {
        Features = new Dictionary<string, float>();
    }

  2. Next, VWExplorer.cs was throwing exceptions in MapContext that I haven't yet figured out. With developerMode=true, the exception happens on the Trace line. With developerMode=false, the vw.Predict call throws this exception:

[screenshot of the vw.Predict exception]

Notes:

  1. I think I set my SettingsBlobUri correctly.
  2. I saw the code comments about ClientDecisionServiceSample needing --cb_explore 10 even though the default Azure deploy uses --cb_explore 2. I deployed both ways and had the same problem.

Last question: Is --cb_explore really fixed at deploy time? I saw that I could change the VW settings in the Azure instance, but I wasn't sure if they'd take effect or not.

fresh build fails

I set my build config to debug/x64, but...

Error Metadata file 'C:\dev\Repos\mwt-ds\bin\x64\Debug\Microsoft.Research.MultiWorldTesting.ClientLibrary.dll' could not be found DecisionServicePrivateWeb C:\dev\Repos\mwt-ds\mc\CSC
Error Metadata file 'C:\dev\Repos\mwt-ds\bin\x64\Debug\Microsoft.Research.MultiWorldTesting.JoinUploader.dll' could not be found ds-provisioning C:\dev\Repos\mwt-ds\provisioning\test\CSC
Error Metadata file 'C:\dev\Repos\mwt-ds\bin\x64\Debug\Microsoft.Research.MultiWorldTesting.ClientLibrary.dll' could not be found ds-provisioning C:\dev\Repos\mwt-ds\provisioning\test\CSC

Restrictions on Azure subscriptions

Based on past conversations, it looks like the deployment will not succeed in all Azure regions. This can depend on the user's subscription (and maybe also the resources in the Decision Service?). We likely need to have a better understanding of this and give additional information in the deployment tutorial, such as which regions have been thoroughly tested.

"reset model" seems to have no effect

After entering a bunch of examples in API Test Drive, it says "total learned examples: 44". Clicking "reset model" and waiting a while, it still says 44. Clicking a few more examples only increases this number.

Comments on the "parameters" blade

Going one by one:

Vowpal Wabbit Switches:

  • is it switches (as in the title) or flags (as in explanation)? Let’s be consistent.
  • Maybe add “If not sure, go with the defaults” in explanation?
  • We can put a cheat sheet in the explanation, so why not?

Experimental unit duration

  • Reword explanation: “Max allowed #seconds to observe the reward after the decision is recorded.”

Initial exploration epsilon: sounds weird. Suggested rewording:

  • Title: “Initial exploration probability”
  • Explanation: Probability of exploration before the first model is trained.

MB/S: change to "Bandwidth required (MB/s)".

Model update interval: the title suggests it is in time units, but the explanation says it is in #examples. So which one is it? If it can be either, the explanation should be reworded.

What do I do after the test drive?

We should have clear steps for what a user should do after the test drive tutorial. At this point, they have had a chance to test the service and hopefully have an idea of what it does, but they need to learn more about how to use the decision service for a scenario they are interested in. Is the paper the best place to learn about this, or is there a more succinct guide we can link to?

Issues with "Settings" dialog in MC

Major issues:

  • Some values (IDs, keys, and addresses) do not fit into the corresponding field and cannot be viewed or copied in full, which I assume is a bug.
  • Many of the settings are [probably] only for very advanced users. So perhaps it would be useful to label them as such and put them below the fold.
    • Specifically, I am not sure why anyone would want to use the following: Settings Address, Online Trainer, ASA Join Server, EH interaction string, EH observation string.

Minor issues:

  • AppID
    • AppID defaults to ResourceGroupName, and cannot be changed. Is this what we want? Then we may want to warn people that whatever they choose for ResourceGroupName is going to be their AppID going forward.
    • Also, the explanation reads as if one can change the AppID (which would also change ResourceGroupName), which is not the case, right?
    • Point out that “Application” = instance of DS. So if they use multiple instances of DS for the same website, it is multiple “applications” from our PoV.
  • Azure Subscription ID:
    • This looks confusing, given that Azure subscriptions have names in plain text. Why not list the subscription name as well?
    • Reword explanation as “ID of the Azure subscription used for this deployment”.
  • AzureResourceGroup:
    • No way to see the explanation without clicking and opening a tab (with the resource group).
  • Application Insights:
    • The explanation talks about initializing the telemetry client, but it is not clear how to do that. Add more explanation, or maybe a link to FAQ? (Can we add URLs in these explanations?)
  • Setting blob address:
    • Sounds like “address for setting blob”. Rename to “Blob address for settings”? Even better, “Address for settings”, so that people don’t need to know what a “blob” is.
    • If people do need to know what a blob is, it should be briefly explained in the explanation.
  • Context type:
    • Rename to “Actions with features?” (or something like that), and have two options, yes/no.
    • In the explanation, ADFs and “regular” features sound mysterious. Say “Do actions come with features? Such features are then included in the context” (or something like that).
  • Training frequency:
    • Currently the only option is “high”, whereas the explanation says that batch training is also available. So this is a little confusing.
    • Batch training is not done automatically, right? Then perhaps we should mention it.
  • Join timeout, Initial eps exploration: please see the “deploy DS blade” for better names and/or explanations.
  • VW arguments:
    • In the explanation, it seems useful to add something like “Do not change unless you know what you are doing”, and add a pointer to the Wiki (wherever we talk about VW options).

API Test Drive ignores input

There seems to be a heisenbug where the API Test Drive ignores the input information. I can't trigger this reliably, but it just happened on 3 out of 5 "successful" deployments this morning. The signature of this bug is the message:

Trainer OK. Total learned examples: N

where N was 1 or 7. The graph is also messed up because of this. The relevant deployments are foo-9, foo-11, and foo-12 in our test account.

Trainer name in deployment tutorial

The deployment tutorial points the user to a “{RG}-trainer-” trainer in step 6, but the name is “trainer-{RG}”. It might be clearer to just say "trainer-{ResourceGroup}*".

Broken links in MC

  • Entry page, the link for “How do I find my password” does not go to an existing Wiki page.
  • In MC/settings, links next to:
    • Application Insights (I get “error retrieving data”)
    • ASA policy evaluation (I get “This asset was not found, it may have been deleted”)
    • ASA join server (I get “This asset was not found, it may have been deleted”)

Misleading message

When you deploy the decision service, wait a long time, and then go to the API test drive page, it says: "Please wait as...". A polite person will wait, resulting in nothing getting done.
