marklogic / mlxprs
MarkLogic Extension for Visual Studio Code
Home Page: https://marklogic.github.io/mlxprs/
License: Other
https://developer.marklogic.com/code/mlxprs/
An EditorConfig file makes it easier for other people working with your code to match your preferences on formatting settings. Example
As a developer, I want to be able to inspect the logs on a remote DHS instance so that I can access this valuable debugging info more readily during development in VS Code.
Can we leverage the new auto-attach feature of the new debugger?
Last-mile integration testing of the extension. This is complementary to the extensive back-end testing that happens as part of the MarkLogic release cycle.
The minimum we should aim for is a single command that can run all of the tests in one shot. Next step is to hook up CI/CD so that it happens automatically on PRs and releases. (I’ll create a separate issue for that.)
If there are problems with the test library we're using, we should investigate how others handle this, for example, the extensions that Microsoft delivers.
Currently the debug adapter does not work if the user has no access to the imported modules. The plan is to ask the MarkLogic server to stream back the module file and display it as a local temporary file for the user to view and debug.
User credentials should not be stored in a plain text file.
A VS Code update today announced a new JavaScript debugger. It's currently in preview. We should verify that our extension works with it before it goes live.
Would be nice to unify settings for the JavaScript debugger with the existing mlxprs settings. Common info like hostname, server name, and credentials should only be configured in one place.
Same as other community projects.
I have mlxprs installed with the default settings.
I would like the ability to override those settings in my JavaScript file so I can better stay in my editor and support multiple projects.
For example, my database setting is set to Documents.
My code could have the following comment block:
/* Settings:mlxprs
mlxprs.database='data-hub';
mlxprs.username='some-disadvantaged-user';
mlxprs.password='shillingForADaysWork';
*/
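To make the proposal concrete, here is a hypothetical parser for such a comment block. The block format and setting names are taken from the example above; mlxprs does not support this today, and the helper name is invented:

```javascript
// Hypothetical parser for the proposed "Settings:mlxprs" comment block.
// Extracts mlxprs.<key>='<value>' assignments into a plain object.
function parseOverrides(source) {
    const block = source.match(/\/\*\s*Settings:mlxprs([\s\S]*?)\*\//);
    if (!block) return {};
    const overrides = {};
    for (const line of block[1].split('\n')) {
        const m = line.match(/mlxprs\.(\w+)\s*=\s*'([^']*)'/);
        if (m) overrides[m[1]] = m[2];
    }
    return overrides;
}

const example = `/* Settings:mlxprs
mlxprs.database='data-hub';
*/`;
console.log(parseOverrides(example)); // { database: 'data-hub' }
```

The extension could merge the returned object over the workspace settings before each eval or debug request.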
I can't pin down the exact way to reproduce this, but rather frequently I find that the result of a JS eval does not update the results pane, even after saving the .sjs file with changes.
I have a search that returns a few documents.
The first time I execute the search I get something like this:
The next time I execute I get something like this:
If I keep executing the search, the JSON formatting will toggle back and forth between formatted and not formatted.
With much larger results I have seen where spaces are injected inline and the JSON is partially corrupted. This one is harder to reproduce.
please add XQuery debugging feature to mlxprs
The input to connectServer is the name of an app server in the MarkLogic configuration. In DHS the app servers are pre-defined and static, so we should be able to use this as the default value. On-prem users will have to change this to reflect their custom app server, just as they do today.
Update the README with a getting started guide quickly covering the “why” and the “how”, including the configuration options and a screenshot showing the debugger in action. Also include a link to MarkLogic documentation about required privileges.
The attach case needs an app server to be connected to the JS debugger first. Only after that can requests launched on the server be paused and debugged. This does not apply to already-running requests, only to the ones launched after the server is connected.
Our JS debugger does not work fully with ES2015 modules (i.e. import syntax).

| | import | require() |
|---|---|---|
| Eval/launch | partial | ✔︎ |
| Attach | partial | ✔︎ |
From @alan-johnson
I have a Mac with a nightly build of 10.0. I've checked the debugger setting for the app server (data-hub-FINAL) and it is set to true. I have the mlxprs extension installed from the marketplace. I downloaded the project, took the "JSDebugger" folder and put inside the "Client" folder for the mlxprs extension. I set up the launch.json settings as per the example. When I select the VS Code debug then pick the "Launch Debug Request" configuration, I get a message box with 'Configured debug type "ml-jsdebugger" is not supported.' My settings point to my data-hub-FINAL database with admin credentials. I can run code with the mlxprs extension just fine. Just not debug.
Do I have to uninstall the mlxprs extension, build from the source then install that package?
As a MarkLogic Developer, I want to be able to debug my Data Services deployed to a DHS instance in order to diagnose a problem or understand what my (or someone else's) code is doing.
Expected interaction:
GOTO step 2

As a developer tuning queries, I'd like a simpler way to interpret what the database is doing under the covers to help me optimize my code.
MarkLogic provides various query plan reports. They provide low-level details of resource usage and optimization strategies. This information is precise and expansive, but difficult for a human to interpret, especially one without deep MarkLogic expertise. We should provide a visual summary of this information that progressively reveals the details for learning and tuning.
connectServer should be idempotent. Is there a way to show "connection" status visually, so you know whether you need to connect? And then if/when we implement XQuery debugging, s/JavaScript/XQuery/ above.
Can you please add the marketplace landing page to the top-level config of the GitHub project?
A VS Code update today announced a new JavaScript debugger. One of the new features is visual profiling. This would definitely provide huge benefits to MarkLogic developers. @zzzwan, can you take a quick look at what we’d need to do to enable this?
XQuery basic/fundamental code completion is not working: I am not able to get code completion for if...then...else, return, for, let, etc.
Bug in evaluating boolean expressions. Steps to reproduce:
In the debug console, enter let a = 10, then a === 20.
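For reference, plain JavaScript evaluates this expression to false, which is what the debug console should report:

```javascript
// Expected behavior in plain JavaScript: strict equality against a
// different number is false, against the same number is true.
let a = 10;
console.log(a === 20); // false
console.log(a === 10); // true
```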
If you start working in JavaScript without first going into XQuery mode, you may not get SJS autocompletion hints. This is because the language server only seems to get activated by switching to the "XQuery (ML)" language mode.
You would expect some helpful SJS suggestions and documentation snippets about cts.aggregate(), but instead you only see other non-ML-related suggestions.
Workaround is to switch to the "XQuery (ML)" language mode at least one time in the editor, and then switch back to JS. This will activate the language server, and suggestions should then work.
When, for example, you fire a doc()[1] query at a modules database to get an XQuery file as a result, you would expect to see the text document laid out as readable source code. Instead, you get a bunch of numbers displayed, apparently representing the character codes of the document retrieved.
Need to fix the results handler to interpret/parse these kinds of results and make them useful.
What do we expect the behavior of the debugger to be when trying to debug a request that drops into XQuery? For example, for historical reasons, Data Hub flows and REST Client API Resource Service Extensions are bootstrapped from XQuery, even if the user serviceable code is written in JavaScript.
I wouldn’t expect the debugger to seamlessly cross over this boundary today. What are our options?
The above is ordered by desirability, descending, but also by complexity/infeasibility. Are there other options? What am I missing?
In the near term and also more generally, I’d suggest extracting the JavaScript from a Data Hub step or a REST Client API Extension into its own library module and debugging it from a unit test (or bootstrap main module). I don’t think we want to encourage debugging framework code in the first place.
A CONTRIBUTING.md file in the project's home directory tells people how they can help out. Example
There are likely situations where the user will not be running the SJS debug server on port 8002, or it will be re-mapped to a different port for whatever reason. This should be configurable, but still default to 8002.
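A minimal sketch of the fallback logic this asks for, assuming a settings object with a hypothetical debugServerPort key (the setting name is invented, not the actual mlxprs configuration):

```javascript
// Hypothetical helper: resolve the debug-server port from user settings,
// falling back to the default of 8002 when unset or invalid.
function resolveDebugPort(settings) {
    return Number.isInteger(settings.debugServerPort)
        ? settings.debugServerPort
        : 8002;
}

console.log(resolveDebugPort({}));                        // 8002
console.log(resolveDebugPort({ debugServerPort: 9100 })); // 9100
```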
As an advanced user, I want to try out builds before they’re officially released so that I can provide feedback to the development team.
Publish pre-release builds on https://github.com/mikrovvelle/mlxprs/releases along with instructions for manual installation. This should also include instructions for getting and running the required version of a pre-release version of MarkLogic, if applicable.
Planning to add JS debugger functionality into mlxprs.
My current plan is to put the debugger-related files inside a folder named "JSDebugger" under the client folder.
Existing files that will be affected are:
client/extension.ts -- in the activate() function, push the debugConfigurationProvider and DebugAdapterDescriptorFactory.
webpack.config.js -- adds one more entry JSDebug for the debug adapter.
package.json -- adds new contributes "breakpoints" and "debugger" and dependencies.
Please advise on where to put the debugger files in the project.
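For context, the package.json additions described above would look roughly like this. This is a sketch based on the standard VS Code extension manifest schema; the label value is assumed:

```json
{
    "contributes": {
        "breakpoints": [
            { "language": "javascript" }
        ],
        "debuggers": [
            {
                "type": "ml-jsdebugger",
                "label": "MarkLogic JS Debugger"
            }
        ]
    }
}
```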
For the launch case, the debug eval options should not be URL-encoded in the REST call.
s/2017/2020/
30-second screencast of debugging a Data Service. Would be a nice-to-have replacement for a screenshot in #31.
To build from source, you currently need to run npm install in both the project root and the ./server folder. You should only need to run it once from the root.
JS debug adapter needs SSL support.
As new functionality is added, it would be nice to refactor the JS debugger code.
The combination of dynamic types and wonky type introspection capabilities in JavaScript makes understanding and debugging code more difficult than it should be. (I'm looking at you, constructor.name, Object.prototype.toString.call(), and Symbol.hasInstance.) MarkLogic compounds this with its own type hierarchies for things like Sequence and Node.
I’ve put together a proof-of-concept visualization that shows any object’s complete type hierarchy, including everything contributed by its prototype chain. This gives a view of what the object is and the available properties/methods.
I’d like to propose an enhancement to the eval capability in mlxprs to provide this visualization optionally out-of-the-box, probably as a separate command.
Ideally, this proposal would come in the form of a pull request with a PoC implementation. However, I wanted to get some feedback before I dug in.
My current implementation is a bit of a hack. It does a nested eval and calls a describe() function that's injected for each request in the E-node to build out a JSON report of the type hierarchy. The browser formats that report as HTML.
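The core idea can be sketched in plain JavaScript by walking the prototype chain. This is a simplified, hypothetical version of the injected describe() helper, not the actual implementation:

```javascript
// Simplified sketch of a describe() helper: walk an object's prototype
// chain and report each level's constructor name and own properties.
function describe(obj) {
    const chain = [];
    let proto = Object.getPrototypeOf(obj);
    while (proto !== null) {
        chain.push({
            name: proto.constructor ? proto.constructor.name : '(anonymous)',
            properties: Object.getOwnPropertyNames(proto),
        });
        proto = Object.getPrototypeOf(proto);
    }
    return chain;
}

console.log(describe([]).map(level => level.name)); // [ 'Array', 'Object' ]
```

The real version would additionally have to account for MarkLogic-specific types such as Sequence and Node on the E-node.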
As a user, when I execute the connectServer command, I should be presented with a list of app servers available in my MarkLogic instance.
Ideally, the list should be responsive like other command-palette interactions in VS Code. E.g. if I want "data-hub-STAGING", I should be able to type "s t a enter" to quickly connect.
Currently, I have to type out exactly "data-hub-STAGING" with no auto-complete or other UI assistance. It's just too daunting.
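The responsive filtering in VS Code's command palette is essentially case-insensitive subsequence matching. A rough sketch of that behavior (illustrative only, not the actual VS Code implementation):

```javascript
// Rough sketch of quick-pick style filtering: a candidate matches when
// the query appears in it as a case-insensitive subsequence.
function fuzzyMatch(query, candidate) {
    const q = query.toLowerCase();
    let i = 0;
    for (const ch of candidate.toLowerCase()) {
        if (i < q.length && ch === q[i]) i++;
    }
    return i === q.length;
}

const servers = ['data-hub-STAGING', 'data-hub-FINAL', 'manage'];
console.log(servers.filter(s => fuzzyMatch('sta', s))); // [ 'data-hub-STAGING' ]
```

In the extension this would amount to feeding the list of app servers to a quick-pick UI instead of a free-form input box.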
./gradlew mlDeploy
./gradlew -i mlWatch
{
"type": "ml-jsdebugger",
"request": "attach",
"name": "Attach to Debug Request",
"path": "${workspaceFolder}/src/main/ml-modules/root",
"debugServerName": "data-services-example"
}
./gradlew -i test