Right now our primary directive is to make sure the tutorial always works for anyone trying out Redwood for the first time. If those users have a bad experience, it's very hard to win back their trust and get them to try the framework again.
Currently the only way to do that is to manually walk through the parts of the tutorial we think may be at risk when upgrading parts of the framework. Ideally this would all be automated through CI (GitHub Actions).
One idea was to make the commands and code snippets in the tutorial "executable" in some way, so that CI could build the app by executing the commands and applying the code changes. We would then need something like Selenium (or maybe just React Testing Library) to access the site through a headless browser and make sure certain UI elements were present, received focus, etc.
My first thought was to add comments to the Markdown (which are ignored by renderers that convert Markdown to HTML) or to rely on the type of the code block.
For executable terminal commands, we could simply look at the type of the code block and, if it's `terminal`, execute the commands one after the other:

We'll use yarn to create the basic structure of our app:

```terminal
yarn create redwood-app ./redwoodblog
```
For code snippets it would be a little harder. Some snippets are the full content of the file, but others are only a portion, or highlight certain changes to certain lines. My thought was to include an actual diff in a comment before the code snippet. Then that diff can be applied to the file during the CI process. It's hard to show an example in actual markdown because it gets converted by GitHub, but here's a screenshot:
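Whatever the comment syntax ends up looking like, the CI side could be sketched like this. Note the `<!-- diff ... -->` marker and the `extractCommentDiffs` name are hypothetical conventions for this proposal, not anything that exists today:

```javascript
// Sketch: collect unified diffs embedded in hypothetical <!-- diff ... -->
// comments that precede a code snippet in the tutorial Markdown.
function extractCommentDiffs(markdown) {
  const diffs = []
  // Match a comment that opens with "diff" and capture everything up to -->
  const pattern = /<!--\s*diff\n([\s\S]*?)-->/g
  let match
  while ((match = pattern.exec(markdown)) !== null) {
    diffs.push(match[1].trim())
  }
  return diffs
}
```

Each extracted diff could then be written to a temp file and applied with something like `git apply` (or an npm diff/patch library) as the CI run builds up the app.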
This would be easier to implement if there are Markdown lexers out there that turn comments into tokens you can actually iterate over, rather than ignoring them. I've used the marked lexer but haven't tried it with comments. We currently use markdown-it to convert the tutorial itself into HTML, but there's no documentation that shows how to access the parser/lexer.
If there aren't any, then I guess phase 1 could be looking for `` ```terminal `` fences, or for blocks between `<!--` and `-->`.
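That phase 1 pass is simple enough to sketch without any lexer at all. A minimal version (the `phase1Scan` name is made up) that pulls both kinds of blocks out of the raw Markdown text:

```javascript
// Sketch of the "phase 1" scan: walk the raw Markdown line by line and
// collect both `terminal` fence commands and <!-- --> comment contents,
// with no real Markdown parser involved.
const FENCE = '`'.repeat(3) // avoids embedding a literal fence in this example

function phase1Scan(markdown) {
  const commands = []
  let inBlock = false
  for (const line of markdown.split('\n')) {
    const trimmed = line.trim()
    if (!inBlock && trimmed === FENCE + 'terminal') {
      inBlock = true // entering a terminal block
      continue
    }
    if (inBlock && trimmed === FENCE) {
      inBlock = false // leaving the terminal block
      continue
    }
    if (inBlock && trimmed) commands.push(trimmed)
  }

  // Grab the contents of every HTML comment block
  const comments = []
  const pattern = /<!--([\s\S]*?)-->/g
  let match
  while ((match = pattern.exec(markdown)) !== null) {
    comments.push(match[1].trim())
  }

  return { commands, comments }
}
```

The `commands` list could then be fed to `child_process.execSync()` one at a time, and each comment routed to whatever ends up handling diffs or test directives.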
Those two things should allow us to create an app along with all the code changes we go through in the tutorial. However, when it comes to sections like "try filling out this form and notice the error message" it gets more difficult. I've never worked with Selenium with Node before, but looking through this page makes me think you could write directives as comments in Markdown. You would need associated test expectations as well. Depending on everyone's comfort with `exec()`, maybe we just write out the code literally and `exec()` it?
```
<!--
@driver.get 'http://bites.goodeggs.com/posts/selenium-webdriver-nodejs-tutorial/'
text = @driver.findElement(css: '.post .meta time').getText()
expect(text).to.eventually.equal 'December 30th, 2014'
-->
```
If `exec()` is too scary, maybe we create a separate `tutorialTests.js` and have comments in the Markdown that say which tests to execute in `tutorialTests.js`:
Open up `web/src/Routes.js` and take a look at the route that was created:
And then `tutorialTests.js` is a whole suite of tests, but the comment above lists which should actually be run (this is just an example test from React Testing Library; I have no idea what kind of test would actually work for checking the route above):
```javascript
import '@testing-library/jest-dom'
import React from 'react'
import { render, fireEvent, screen } from '@testing-library/react'
import HiddenMessage from '../hidden-message'

test('shows the children when the checkbox is checked', () => {
  const testMessage = 'Test Message'
  render(<HiddenMessage>{testMessage}</HiddenMessage>)
  expect(screen.queryByText(testMessage)).toBeNull()
  fireEvent.click(screen.getByLabelText(/show/i))
  expect(screen.getByText(testMessage)).toBeInTheDocument()
})
```
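The selection step could be sketched like this, assuming a made-up `<!-- test: name -->` comment convention (the marker and `extractTestNames` name are hypothetical):

```javascript
// Sketch: collect test names from hypothetical <!-- test: name --> comments
// scattered through the tutorial Markdown.
function extractTestNames(markdown) {
  const names = []
  const pattern = /<!--\s*test:\s*(.+?)\s*-->/g
  let match
  while ((match = pattern.exec(markdown)) !== null) {
    names.push(match[1])
  }
  return names
}
```

Each collected name could then be handed to Jest's test-name filter, something like `yarn jest tutorialTests.js -t "<name>"`, so `tutorialTests.js` stays one big suite and the Markdown decides which pieces fire at each step.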
So what do we think? Possible?