
perun's Introduction


Simple, composable static site generator built on top of Boot. Inspired by the Boot task model and Metalsmith. Perun is a collection of Boot tasks that you can chain together to build something custom that suits your needs. Please check out our Getting Started guide.

HELP NEEDED!

This project is currently unmaintained! Want to help? Please see #241.

If you need to create a website or a blog, please consider using Montaigne, which is a project by Anton

For information and help

Clojurians Slack (join) has a #perun channel for discussion about Perun.

Check SPEC.md for documentation about metadata keys used by built-in tasks.

Plugins

See the Built-Ins Guide for a list of built-in tasks.

3rd party useful plugins

There are plenty of Boot plugins that can be useful when you are using perun:

Version

Perun is currently tested against Boot 2.8.2. Higher versions are blocked by boot-clj/boot#745.

Plugin system

Everything in perun is built as an independent task. The simplest blog engine might look like this:

(deftask build
  "Build blog."
  []
  (comp (markdown)
        (render :renderer renderer)))

But if you want to create permalinks, generate a sitemap and RSS/Atom feeds, hide unfinished posts, and add a time-to-read estimate to each post, you can do:

(deftask build
  "Build blog."
  []
  (comp (markdown)
        (draft)
        (ttr)
        (slug)
        (permalink)
        (render :renderer renderer)
        (sitemap :filename "sitemap.xml")
        (rss :site-title "Hashobject" :description "Hashobject blog" :base-url "http://blog.hashobject.com/")
        (atom-feed :site-title "Hashobject" :description "Hashobject blog" :base-url "http://blog.hashobject.com/")
        (notify)))

You can also chain these with standard Boot tasks. E.g. if you want to upload the generated files to Amazon S3, you might use the boot-s3 plugin.

Then your code might look like this:

(deftask build
  "Build blog."
  []
  (comp (markdown)
        (render :renderer renderer)
        (s3-sync)))

Use cases

  • Generate blog from markdown files.
  • Generate documentation for your open source library based on README.
  • Any case where you'd want to use Jekyll or another static site generator.

Examples

A minimal blog example, included in this repo. See build.boot

Real-world websites created with perun:

How does it work

Perun works in the following steps:

  1. Read all the files from the source directory and create the fileset metadata, (:metadata (meta fileset)), with all meta information available to all tasks/plugins
  2. Call each perun task/plugin to manipulate the fileset metadata
  3. Write the results to the destination/target directory

Perun embraces the Boot task model. The fileset is the main abstraction and the most important thing you should care about. When you use perun, you create a custom task that is a composition of standard and 3rd-party tasks/plugins/functions. Perun takes a set of files as input (e.g. source markdown files for your blog) and produces another set of files as output (e.g. generated, deployable HTML for your blog).

The fileset passed to every task has metadata, (:metadata (meta fileset)). This metadata accumulates information from each task. See SPEC.md for documentation about the structure of this metadata.
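As a sketch of how a custom task might read this accumulated metadata (the io.perun.meta namespace alias and its get-meta helper are assumptions based on snippets elsewhere in this document; check perun's source for the exact API):

```clojure
;; Sketch: a custom task that prints each entry's :title.
;; io.perun.meta/get-meta is an assumed helper name; consult
;; perun's source for the current metadata API.
(require '[boot.core :refer [deftask with-pre-wrap]]
         '[io.perun.meta :as pm])

(deftask print-titles
  "Print the :title of every file perun has metadata for."
  []
  (with-pre-wrap fileset
    (doseq [entry (pm/get-meta fileset)]
      (println (:title entry)))
    fileset)) ; a task must return the fileset
```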

Install

[perun "0.3.0"]

Usage

Create a build.boot file with content similar to the following. Specify your own options for each task; see each task's documentation for the full list of supported options.

(set-env!
  :source-paths #{"src"}
  :resource-paths #{"resources"}
  :dependencies '[[org.clojure/clojure "1.7.0"]
                  [hiccup "1.0.5"]
                  [perun "0.2.0-SNAPSHOT"]
                  [clj-time "0.9.0"]
                  [hashobject/boot-s3 "0.1.2-SNAPSHOT"]
                  [jeluard/boot-notify "0.1.2" :scope "test"]])

(task-options!
  pom {:project 'blog.hashobject.com
       :version "0.2.0"}
  s3-sync {:bucket "blog.hashobject.com"
           :source "resources/public/"
           :access-key (System/getenv "AWS_ACCESS_KEY")
           :secret-key (System/getenv "AWS_SECRET_KEY")
           :options {"Cache-Control" "max-age=315360000, no-transform, public"}})

(require '[io.perun :refer :all])
(require '[hashobject.boot-s3 :refer :all])
(require '[jeluard.boot-notify :refer [notify]])

(defn renderer [{global :meta posts :entries post :entry}] (:name post))

(defn index-renderer [{global :meta files :entries}]
  (let [names (map :title files)]
    (clojure.string/join "\n" names)))

(deftask build
  "Build blog."
  []
  (comp (markdown)
        (draft)
        (ttr)
        (slug)
        (permalink)
        (render :renderer renderer)
        (collection :renderer index-renderer :page "index.html")
        (sitemap :filename "sitemap.xml")
        (rss :site-title "Hashobject" :description "Hashobject blog" :base-url "http://blog.hashobject.com/")
        (s3-sync)
        (notify)))

After you have created the build task, simply run:

boot build

Tips

Debug

To see more detailed output from each task (useful for debugging), use the --verbose Boot flag, e.g. boot --verbose dev.

Development setup

Perun is a static site generator, so usually you use it by running boot build, which generates your static site. This process is robust and great for production, but it's slow and lacks fast feedback when you're developing your site locally. To solve this problem we recommend the following setup:

  1. Have two separate tasks for building the local and production versions, e.g. build-dev and build.
  2. Include boot-http in your build.boot file. This enables serving your site with a web server.
  3. Create a dev task that calls build-dev on any change to your source files:
  (deftask dev
    []
    (comp (watch)
          (build-dev)
          (serve :resource-root "public")))
  4. Run boot dev. In this setup an HTTP server serves your generated content, which is regenerated every time you change your source files, so you can preview your changes almost immediately.
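Under this setup, build-dev might be a trimmed-down version of the production build that skips the slower, production-only steps (the task selection here is illustrative; renderer and index-renderer are the functions from the Usage section above):

```clojure
;; Sketch: a dev build that omits sitemap/RSS/S3 steps for speed.
;; Which tasks to keep is up to you; this is one plausible cut.
(deftask build-dev
  "Build blog for fast local preview."
  []
  (comp (markdown)
        (render :renderer renderer)
        (collection :renderer index-renderer :page "index.html")))
```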

Auto deployment

It's quite easy to set up automatic static site deployment. E.g. say you have a GitHub repo for your blog and you are using boot-s3 to sync files to Amazon S3. In this case it's possible to set up the flow so that every commit to GitHub is built on Heroku using perun and deployed to AWS S3.

Assuming you have a setup similar to the example, in order to achieve this you need to:

  • create Heroku application for your GitHub repo with build.boot file
  • ensure that build.boot has a build task that both builds and deploys
  • specify the AWS_ACCESS_KEY and AWS_SECRET_KEY environment variables (they are mandatory for the boot-s3 plugin)
  • add the boot/perun buildpack: heroku buildpacks:add https://github.com/hashobject/heroku-buildpack-perun
  • enable GitHub integration https://devcenter.heroku.com/articles/github-integration
  • change your site in GitHub and see the changes deployed to AWS S3 in a few minutes

A similar auto deployment can also be configured using CircleCI.

Contributions

We love contributions. Please submit your pull requests.

Main Contributors

Copyright and License

Copyright © 2013-2019 Hashobject Ltd ([email protected]) and perun Contributors.

Distributed under the Eclipse Public License.


perun's Issues

boot runs successfully but no files are written to public

I'm trying to get perun set up. I got to the point where it doesn't throw any errors when I run boot build, but it does not write any files to public, even though it claims that it does.

[slug] - added slugs to 2 files
[ttr] - added TTR to 2 files
[word-count] - added word-count to 2 files
[permalink] - added permalinks to 2 files
[canonical-url] - added canonical urls to 2 files
[build-date] - added date-build to 2 files
[gravatar] - added gravatar to 2 files
[render] - rendered 2 pages
[collection] - rendered collection index.html
[inject-scripts] - injected 0 scripts into 3 HTML files
[sitemap] - generated sitemap and saved to public/sitemap.xml

My build.boot file is

https://gitlab.com/200ok/200ok.gitlab.io/blob/master/build.boot

Use a more specific key than `:metadata`

Currently perun-related metadata is stored under :metadata in the fileset's metadata. Inception.
It seems like a good idea to me to use a more descriptive perun-specific key to avoid collisions with other tasks and just overall be clearer about what's in there.

Simple suggestion: :io.perun.

Allow markdown task to use an external markdown parser

I've added this in a fork over here. I wanted this so I could use Pandoc instead of Endophile for markdown parsing. Would you be interested in supporting this?

;; argument is an edn vector of strings
(markdown :cmd ["pandoc"])

Thanks for the library by the way, really cool idea.

New plugin: meta

I want to have a plugin where I can specify some global settings that would be accessible to all templates.
E.g. I'm thinking that base_url, site_name, and copyright_dates are repetitive and could be used on any page.
Right now there is no way of specifying such options globally for all tasks.

I'm thinking of creating a super simple task called meta. Something like this:

(comp (meta {:name "Hashobject Blog" :base_url "http://blog.hashobject.com"})
      (markdown)
      (draft)
      ...)

Then this meta can be merged with every file's meta, or it can be accessible as (:global-metadata (meta fileset)).

look for metadata with dashes by default

Instead of looking for metadata like date_published by default, it seems more idiomatic to look for date-published. As far as I understand, both are valid YAML keys.

Happy to create a PR if wanted.

Priorities and features

@martinklepsch and @Deraen let me know what you think are the most important tasks to work on.

I can see following potential tasks (add more if yours are different):

  • add new tasks/plugins
  • improve existing plugins/tasks (e.g. I think there is still a bug where sitemap ignores pages generated by the collections plugin)
  • improve/rewrite the README
  • improve error handling in Perun, e.g. validate symbol options

draft task broken in master

Probably due to recent refactorings, the draft task is currently broken: it doesn't actually cause the drafts not to be rendered (or shown in collections).

Here's a version that works for me:

(deftask remove-draft
  []
  (with-pre-wrap fileset
    (let [files (filter identity (pm/get-meta fileset))
          updated-files (->> files
                             (map #(cond-> % (:draft %) (dissoc :content :include-atom :include-rss))))]
      (pm/set-meta fileset updated-files))))

New plugin: highlight code

Might require splitting the render task in half so the hiccup can be modified. Or it could maybe work on existing HTML. (#30).

But first we need a highlighting lib usable from Boot. There don't seem to be any maintained Java implementations. Jygments looks the most useful, but it is not released on Maven.

Instead we could use a pod with Jython and use Pygments (a python lib). Example https://github.com/tailrecursion/boot-hoplon/blob/master/src/tailrecursion/boot_hoplon/pygments.clj.

Feature: pass pre-parse function to markdown task

Use case:

Our editors want to set, for example, the rel or target attribute for links while using markdown syntax. Inspired by Jekyll I extended the markdown syntax for our frontend editor like so:

[foo](bar.html){:rel "nofollow" :blank? true :title "foo bar"}

The tag is handled (replaced with final HTML) via re-seq/read-string before the frontend markdown parser does its magic for the preview. Now I would like to be able to pass a :pre-parse-fn to the perun markdown task to do the same for the backend part.

Took a stab at the feature, but as the markdown parsing happens in its own pod, this seems impossible?
Ideally the passed function would run after https://github.com/hashobject/perun/blob/master/src/io/perun/markdown.clj#L97

I could probably create a separate task that does the pre-processing before the markdown task, but I would like to hear your opinions first. Happy to create a PR if this is somehow feasible.
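For reference, the "separate task" fallback mentioned above could be sketched like this, using only standard Boot fileset operations (rewrite-links is a placeholder for the actual re-seq/read-string logic; all names here are illustrative):

```clojure
;; Sketch: rewrite extended link syntax in .md files before (markdown)
;; runs. Uses standard Boot fileset APIs; rewrite-links is a stub.
(require '[boot.core :as boot :refer [deftask with-pre-wrap]]
         '[clojure.java.io :as io])

(defn rewrite-links
  "Placeholder: expand [foo](bar.html){:rel \"nofollow\"} into final HTML."
  [markdown-str]
  markdown-str) ; the real version would use re-seq/read-string

(deftask pre-parse-md
  []
  (let [tmp (boot/tmp-dir!)]
    (with-pre-wrap fileset
      (doseq [f (boot/by-ext [".md"] (boot/input-files fileset))]
        (let [out (io/file tmp (boot/tmp-path f))]
          (io/make-parents out)
          (spit out (rewrite-links (slurp (boot/tmp-file f))))))
      (-> fileset (boot/add-resource tmp) boot/commit!))))

;; Usage: (comp (pre-parse-md) (markdown) ...)
```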

Unable to resolve var: s3-sync in this context

I get this error when I use the example code in the README:

 ❯ boot dev                                                                                                                                                                                    [12:26:52]
clojure.lang.ExceptionInfo: Unable to resolve var: s3-sync in this context
    data: {:file
           "/var/folders/8j/27sz_2rj0kd1b3v9y4n1_1t00000gn/T/boot.user3794536342121430912.clj",
           :line 19}
java.lang.RuntimeException: Unable to resolve var: s3-sync in this context

I checked the version of s3-sync and it seems to match. Is this a weird Maven thing on my machine?

Key metadata by path instead of filename

Metadata in perun should be keyed by the path relative to the fileset instead of by filenames. Plain filenames can be overridden by files with the same name.

I'm also thinking of redundantly adding the path to the metadata map which would easily allow filtering and grouping by path without having to deal with MapEntries all the time.

A small logo change

Hi, I would like to suggest a little change to the current logo on the GitHub front page.

Currently the pinkish and blueish triangles seem misaligned: the lower left edge of the pink triangle passes close to the left edge of the blue internal triangle. It seems a bit off.

You could possibly fix it by:

  • aligning the edges more precisely,
  • making the triangles non-transparent.

Just a suggestion. Thanks.

New plugin: Asciidoctor

I'm working on an Asciidoc parser using Asciidoctor. The Ruby commands are run in boot-jruby. As it is being used for documentation, the intention is to use the diagram back-end for generating PlantUML and Graphviz images to be included in the final doc. Generating pdf versions on the side would be another option, but in my opinion this goes beyond the goals of Perun.

I have a couple of points worth discussing:

  1. One of our current documents has too many characters to be handled in the eval statement when handling pods (concerning the render stage currently). I'm considering a file-based solution for HTML inclusion. So the Asciidoctor plugin would generate a (partial) HTML file and add the referred filename under, say, an :include key in the metadata. This allows pages to refer to this file, rather than load the value from the :content key. This slurp could also be handled within the Asciidoctor plugin to end up with behavior similar to, say, the Markdown plugin.
  2. Asciidoc offers several built-in attributes to handle metadata. I plan on parsing the Asciidoc file to extract these attributes and include them in the metadata to be referenced.
  3. Asciidoc offers the ability to evaluate the file for different inline inclusion macros. My use-case uses these ifdef statements to include code examples for different languages. We intend to generate the document for these different attributes. This evaluation could be done by parsing all ifdef commands and just evaluating all options. A more conservative way would be to either include the attributes in the plugin call, and/or announcing the desired evaluation attributes in a commented asciidoc file header.

Pegdown has been deprecated; pick a new markdown parser

I just noticed that Pegdown has been EOL'd as of five days ago (see README: https://github.com/sirthias/pegdown), so I think a replacement is probably worth it for Perun. I've been toying with the idea of adding Pandoc support, and there are a few other mentions of this in other issues, but I'm not sure if Pandoc can support all of the same features that Pegdown does. Pegdown's deprecation notice recommends switching to flexmark-java, which is apparently a drop-in replacement.

I can integrate one or both of these, but I'd like to get some opinions on what you'd like to see, whether Pandoc, flexmark-java, some other option, or possibly multiple integrations. Let me know what you think.

Split render tasks in two

The render and collection task implementations could be split into render-hiccup, collection-hiccup and write-html tasks I think.

A use case would be to do some transformations to hiccup in between:

(comp (render-hiccup :renderer 'foobar/post)
      (highlight)
      (write-html :out "public/post/foobar.html"))

render and collection could still work as is, they would just combine two other tasks:

(deftask render [...]
  (comp (render-hiccup :renderer renderer) (write-html :out ...)))

I could implement this when I have need for this. For now I'm using Highlight.js.

Use global metadata for feed options

I use global metadata to set the title and url of the site so it would be very useful if I could use that for feeds also. As the global metadata schema is not defined this is not very straightforward.

Alternatives:

  1. Define some global metadata keys which will be used by built-in tasks if set
  2. Add options to the tasks to set which keys to use

Task for rendering static pages

I'm currently doing something like this:

(p/collection :renderer 'org.martinklepsch.blog/error-page :page "error.html")

but dropping any of the passed arguments, i.e. generating a truly static html file:

(defn error-page [_ _]
  (base ...))

While using the collection task works perfectly well in this case it might make sense to provide a general task for this usecase. Potentially task options could be used to pass arguments to the render-fn.
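A minimal sketch of such a dedicated task, simply delegating to the existing collection task with a caller-supplied renderer (the task name and option names are illustrative, not part of perun's current API):

```clojure
;; Sketch: a "static page" task that wraps collection. The renderer
;; is expected to ignore its arguments, as in the error-page example.
(deftask static-page
  "Render a truly static page."
  [r renderer SYM sym "fully qualified renderer symbol"
   p page     PAGE str "output filename, e.g. error.html"]
  (collection :renderer renderer :page page))
```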

New plugin: Atom XML

RSS doesn't contain the complete post content, so it's quite useless; I guess Atom XML is more common these days. Luckily it's trivial to write the XML using data.xml even in the project, but I guess this is so common that we should provide an existing task for it:

(ns blog.views.atom
  (:require [clojure.data.xml :as xml]
            [blog.dates :refer [datestr]]
            [blog.views.common :as common]))

(defn render
  [{:keys [author base-url site-title]}
   posts]
  (xml/emit-str
    (xml/sexp-as-element
      [:feed {:xmlns "http://www.w3.org/2005/Atom"}
       [:title site-title]
       [:link {:href (str base-url "atom.xml") :rel "self"}]
       [:link {:href base-url}]
       ; [:updated "fixme"]
       [:id base-url]
       [:author
        [:name (:name author)]
        [:email (:email author)]]
       (for [{:keys [canonical-url content name date-published]} (take 10 posts)]
         [:entry
          [:title name]
          [:link canonical-url]
          ; (if date-updated [:updated date-updated])
          [:content {:type "html"} (str content)]
          ])])))

require renderer ns before evaluating with arguments

I have a renderer in app.static that eventually gets some records as arguments. The records are defined in namespace app.records. app.static transitively requires app.records.

When the renderer is called with its arguments (the records), the arguments seem to be deserialized before the renderer-fn is called. Because of this I get an exception like this one:

java.lang.ClassNotFoundException: app.records.Track

I solved the issue manually now by running the following before the task is executed:

(boot.pod/require-in @perun/render-pod 'app.static) ; transitively requires app.records

but not having to do that would be nice. For some reason the require-in docs state "Avoid this function." but I'm not sure why. Maybe @micha or @alandipert can weigh in on that.

Faster recompilation

Currently, when using this together with the watch task, recompilation with 20 posts takes maybe 1.5 seconds. This could probably be made much faster by only parsing the changed .md files.

Idea:

  • Instead of writing metadata to a file, assoc it to fileset metadata
  • Store metadata as a map of md file -> metadata
  • Markdown task would check for changed files and update those in the metadata

remove +defaults+

Would you accept a PR removing +defaults+? It looks like dead code, and in my opinion task option defaults should probably be documented on a per-task basis.

canonical_url

Each post should have a canonical-url property. This can be calculated as "domain" + "post url".
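A possible implementation sketch, assuming each post's metadata already carries a :permalink key (the key name is an assumption):

```clojure
;; Sketch: canonical URL = site base URL + post permalink.
;; :permalink is an assumed metadata key.
(defn canonical-url [base-url post]
  (str base-url (:permalink post)))

;; e.g. (canonical-url "http://blog.hashobject.com" {:permalink "/foo/"})
;; => "http://blog.hashobject.com/foo/"
```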

Reload blog specific Clojure code

If I change e.g. a post or collection render function, I have to restart Boot to see the changes. I can think of two alternatives to make this work with the watch task so that changes are picked up instantly.

  1. A task which uses clojure.tools.namespace's refresh
  2. render and collection tasks are instead run inside a new pod each time

I would probably prefer the second option. It should be quite fast when used with a pod pool.

Deprecate draft task

I think the draft task does not provide a lot of value and could easily be emulated by using (set-env! :resource-paths ...) and the like. Less code to maintain, and users learn something about Boot, which might be useful for exploring their own ideas, etc.

Specify dedicated location for input files

All files located in folders defined as :resources will be candidates for the perun tasks. Is there a way to specify a dedicated location/fileset that will be available to perun tasks only?

`permalink` depends on `slug`

By default, the permalink task only works if the slug task was run previously. This caught me off-guard.

Possible solutions:

  • call slug if metadata key not already present?
  • throw an exception at runtime if slug is not present
  • warn at runtime if slug is not present
  • add documentation

New plugin: hyphenate

Depending on which phase this is done in, it might require splitting the render task (#30) or something. The best candidates are probably to hyphenate the hiccup data or the HTML.

I'm working on converting a Python hyphenation implementation to Clojure, and once that is done it should be quite easy to implement a task for this.
