Interactive-diagrams updates

It has been some time since I’ve written a blog post. Today I would like to present the latest updates from the interactive-diagrams project.

  • diagrams-ghcjs finally got text support

Big thanks to Joel Burget for the implementation.

The testsuite demo for the package can be found at http://co-dan.github.io/ghcjs/diagrams-ghcjs-tests/Main.jsexe/ as usual.

  • New interactive widgets

We got rid of the old wizard-like widgets in favour of “all-in-one” style widgets. Thanks to Brent and Luite who came up with the type trickery to get this done.

  • New flat design

In order to match your slick & flat iOS 7 style, I’ve rolled out an update to Bootstrap 3.

  • Ability to paste code with errors. (It’ll ask you to confirm that the mistake wasn’t accidental.)
  • Ability to quickly include a number of imports. By checking the “import standard modules” checkbox you’ll automatically bring several useful modules into scope (see the sketch after this list):
    • Diagrams.Prelude
    • Diagrams.Backend.SVG (or Diagrams.Backend.GHCJS if you are making an interactive widget)
    • Data.Maybe
    • Data.Tuple
    • Data.List
    • Data.Char
  • Other minor UI fixes, such as documentation improvements
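
As a quick illustration, here is the kind of paste that works with just that checkbox ticked. This is only a sketch: the name example is illustrative, and the imports are written out for reference even though, with the box checked, they can be left off.

import Data.List (sort)
import Diagrams.Backend.SVG
import Diagrams.Prelude

-- Three circles laid out side by side, in increasing size thanks to
-- Data.List.sort; this is the sort of definition you would paste.
example :: Diagram SVG R2
example = hcat (map circle (sort [3, 1, 2]))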

GSoC 2013, an afterword

The Summer of Code 2013 is over, and here is what I have to say about it.

Introduction

The project is live at http://paste.hskll.org. The source code can be found at http://github.com/co-dan/interactive-diagrams.

I would like to say that I do plan to continue working on the project (and on adjacent projects as well if possible).

Interactive diagrams

Interactive diagrams is a pastebin and a set of libraries for dynamically compiling, interpreting and rendering the results of user-submitted code in a secure environment.

The user inputs some code and the app compiles and renders it. Graphical output alongside the code can be useful for sharing experiments, teaching beginners and so on. If the user inputs code that cannot be rendered on the server (for example, a function), the app produces an HTML5/JS widget that runs the corresponding code.
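
To make the distinction concrete, here is a sketch of the two kinds of pastes. The names are illustrative; for a real widget paste the GHCJS backend would be used instead of SVG, but SVG keeps the sketch self-contained.

import Diagrams.Backend.SVG
import Diagrams.Prelude

-- A plain value: the server can render this to a static image.
static :: Diagram SVG R2
static = circle 1 # fc green

-- A function: this cannot be rendered on the server, so the app compiles it
-- with GHCJS into an HTML5/JS widget that asks the viewer for the argument.
interactive :: Double -> Diagram SVG R2
interactive r = circle r # fc blue ||| square r # fc yellow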

The resulting libraries can be used in third-party services and programs.

Screenshot

[screenshot]

Technology used

The pastebin is powered by Scotty and scotty-hastache, access to the PostgreSQL database is handled by the excellent persistent library, and compilation is done using GHC and GHCJS inside worker processes powered by the restricted-workers library.
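
As a rough sketch of the web-app side (not the actual scotty-pastebin code; the evaluate function below is a made-up stand-in for the call that ships code to a worker), a Scotty application of this shape serves a paste form and returns the result:

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Monad.IO.Class (liftIO)
import Data.Monoid (mconcat)
import qualified Data.Text.Lazy as T
import Web.Scotty

-- A toy "evaluator": in the real app this is where the code would be handed
-- off to a worker process; here it just shouts the input back.
evaluate :: T.Text -> IO T.Text
evaluate code = return (T.toUpper code)

main :: IO ()
main = scotty 3000 $ do
    get "/" $
        html "<form method=post action=/paste><textarea name=code></textarea><input type=submit></form>"
    post "/paste" $ do
        code   <- param "code"
        result <- liftIO (evaluate code)
        html (mconcat ["<pre>", result, "</pre>"])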

You can also read my previous report on this project, which is still largely relevant.

I plan on updating the documents on the wiki sometime soon.

Progress

The bad news is that I don’t think I was able to complete 100% of what I originally envisioned. The good news is that I seem to know, almost exactly, what I want to improve and how to do it. As I’ve mentioned, I plan on continuing with the project and I hope that it will grow and improve.

One thing that I found annoying was the (technical) requirement to use GHC HEAD. Because of it, a lot of packages required updates and fixes. Due to changes in GHC and bugfixes in GHCJS I had to perform the tiring and not very productive procedure of updating all the necessary tools, rebuilding everything and so on. But I guess that’s just how computers work, and I am sure that in the future (with the release of GHC 7.8 and a new Haskell Platform) the maintenance and installation will be much easier. Another thing that took a lot of my time was configuring the system and setting up the necessary security measures.

Other things that slowed progress down include: the lack of a good build system; in some cases, my non-American timezone (actually, I think that the fact that my mentor, Luite Stegeman, was quite close to me in terms of timezones allowed us to communicate very frequently, as we did); and my lack of familiarity with the tools I used (although you can also look at it this way: I had an opportunity to learn exciting new things ;] ).

Among the grand things I plan to do: release a library for manipulating the Haskell AST at the GHC level; make an IRC bot using the eval-api and restricted-workers; continue writing my notes/tutorials about the GHC API (I have a few drafts lying around).

Some code refactoring should come along and a number of features for the pastebin should be implemented.

Feelings

When the end of the program was approaching, I expected to have the sort of conflicted feelings you usually get when you finish reading a good book – one part of you feels happy because you had an enjoyable experience, yet another part of you doesn’t feel so giddy, because the thing you enjoyed is over. Well, I didn’t get this with GSoC. I did feel happy, but I didn’t get that touch of sadness. GSoC was a way for me to get into collaborating with people on real-world open source projects, and the end of GSoC is, for me, the beginning of something else. I can now use my experience to write better code, write more code and write useful code.

Concluding

I had a very exciting summer and I would absolutely recommend the Google Summer of Code program to anyone eligible. There is, however, one thing to remember. Programmers are known to be the kind of people who set ambitious goals. Reach for something inspiring and ambitious, yet realistic: that way you’ll have a concrete target that you know you are able to achieve, but you’ll also have room for improvement.

PS. Acknowledgments

I would like to thank the various people who helped me over the summer: Luite Stegeman, Brent Yorgey, Carter Schonwald, Daniel Bergey, Andrew Farmer; everyone in #diagrams and everyone in the #haskell channel who patiently answered my questions; and everyone on GitHub who responded to my comments, questions and pull requests. The Haskell community is lucky to have a huge number of friendly and smart people.

WIP: GHCJS backend for the diagrams library

About

I’ve picked up development of the diagrams-ghcjs backend for the infamous diagrams library. This backend is what we use for the interactive-diagrams pastebin; it renders diagrams onto an HTML5 Canvas using the canvas bindings for ghcjs. diagrams-ghcjs is a fork of the (unmaintained?) diagrams-canvas backend by Andy Gill and Ryan Yates.

The library is still a work in progress and requires bleeding-edge versions of ghcjs, ghcjs-canvas and the ghcjs shims to function.

The library is scheduled to be released together with ghcjs.

Demo

The current demo can be found at http://co-dan.github.io/ghcjs/diagrams-ghcjs-tests/Main.jsexe/. It is supposed to work in both Firefox and Chrome.

Problems

  • Text is not implemented. Some work is being done in the text-support branch. Generally, it has proven hard to implement good text support; even the diagrams-svg backend’s text support is limited;
  • Firefox still does not support line dashing via ‘setLineDash’ and ‘lineDashOffset’. As a result we need additional shims.

Pastebin update

I have updated the pastebin design and added some useful features.

Along with some minor tweaks, the main changes are:

  • Author & title field added
  • Slick bootstrap design including buttons, pills and other web two oh stuff
  • Gallery of random images from the pastebin database
  • Two modes for viewing a paste: view mode and edit mode (edit mode still lacks a sophisticated JS editor)
  • Code highlighting in the view mode
  • Installed all the Acme packages on the server

Check the new website out: http://paste.hskll.org

If you have any suggestions regarding the design or the functionality of the web site please don’t hesitate to mail me or leave a comment.

View mode

[screenshot: view mode]

New paste

[screenshot: new paste]

Interactive-diagrams GSoC progress report

Intro

As some of you may already know, I’ve published the first demo version of interactive-diagrams online; it can be found at http://paste.hskll.org (thanks to my mentor Luite Stegeman for hosting). It’s not very interactive yet, but it’s a good start. At the same time, it took me a while to get everything up and running, so in this blog post I would like to describe and discuss the overall structure and design of the project, along with some details about the vast number of security restrictions that are being used.

Please note that http://paste.hskll.org is just a demo version and I can guarantee neither the safety of your pastes nor the uptime of the app. The ‘release notes’ can be found here.

If you have any suggestions or bug reports don’t hesitate to mail me (difrumin аt gmail dоt com) or use the bugtracker.

System requirements

A GNU/Linux operating system, GHC 7.7 (I think it’s possible to make the whole thing work with GHC 7.6, but I don’t have the time to support and test it), and lots of RAM. In order to use some of the security restrictions you will also need SELinux and cgroups.

High-level structure

The whole program consists of three main components (or rather, three main types of components, since there are usually multiple workers in the system):

  • The web app (sources can be found in scotty-pastebin), powered by WAI, Scotty and Blaze;
  • The service app (eval-api/src-service);
  • Workers (eval-api/src).

The web server handles user requests and database logic, and renders the results. Workers are the processes that perform the actual evaluation. The service component manages the pool of workers: it keeps track of how many workers are available and forks new ones if necessary. The web app does not communicate with workers without the permission of the service.

All the communication between the components is performed with the help of UNIX sockets.
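
For the curious, this is roughly what talking over such a socket looks like from Haskell. This is only a sketch: the socket path and the message are made up, and the real wire format lives in eval-api.

module Main where

import qualified Data.ByteString.Char8 as B
import Network.Socket (Family (AF_UNIX), SockAddr (SockAddrUnix),
                       SocketType (Stream), close, connect, defaultProtocol, socket)
import Network.Socket.ByteString (recv, sendAll)

-- Connect to the service's 'control' socket, send one message and read a reply.
main :: IO ()
main = do
    sock <- socket AF_UNIX Stream defaultProtocol
    connect sock (SockAddrUnix "/idia/run/sock/control")  -- illustrative path
    sendAll sock (B.pack "need a worker\n")
    reply <- recv sock 4096
    B.putStrLn reply
    close sock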

Request example

Here’s an example workflow in the system:

  1. User connects to the web server, sends the request to evaluate a certain bit of code.
  2. Web server talks to the service, requesting a worker.
  3. The service reuses an idle worker if one exists. Otherwise it forks a new one, or blocks if the worker limit has been reached.
  4. Worker, upon starting, loads the necessary libraries and applies security restrictions.
  5. The web server receives a worker and sends it a request for evaluation.
  6. The web server waits; if there is no reply from the worker after a certain amount of time, it sends a message to the service saying that the worker timed out. If it does receive a reply, it stores the result in the database and continues with the user request.
  7. When the service receives a message about one of its workers, it decides whether to kill/restart it or not. If the worker’s process has timed out or resulted in an error (e.g. an out-of-memory exception), the service restarts it.
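
A tiny sketch of the decision made in the last step (the type and the names are made up for illustration; the real logic lives in the eval-api/restricted-workers code):

data WorkerStatus = Finished | TimedOut | Errored String
    deriving (Eq, Show)

-- Restart a worker whenever it timed out or died with an error.
shouldRestart :: WorkerStatus -> Bool
shouldRestart Finished    = False
shouldRestart TimedOut    = True
shouldRestart (Errored _) = True

main :: IO ()
main = mapM_ (print . shouldRestart) [Finished, TimedOut, Errored "out of memory"]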

Component permissions

Setting up the right permissions for the components is a crucial part of creating a secure environment. Depending on which security restrictions you have enabled, you might want to choose different permissions for the processes. On http://paste.hskll.org we use the full set of security restrictions and limits (see the next section), which requires us to give the components certain permissions.

  • scotty-pastebin runs as a regular user in the multithreaded runtime;
  • eval-service runs as the superuser (required for setting up chrooted jails) in a single-threaded environment (required due to forking/SELinux restrictions, see the SELinux section for details) and listens on the ‘control’ socket;
  • workers are forked from eval-service as root, but they change their process uid as soon as possible; each worker listens on its ‘workerN’ socket (opened prior to chroot’ing).

Additionally the whole thing runs in a VM.

See also this wiki page written by Luite.

Security limitations and restrictions

Interactive-diagrams applies a whole lot of limitations to the worker processes, which can be configured using the following datatype:

data LimitSettings = LimitSettings
    { -- | Maximum time for which the code is allowed to run
      -- (in seconds)
      timeout     :: Int
      -- | Process priority for the 'nice' syscall.
      -- -20 is the highest, 20 is the lowest
    , niceness    :: Int
      -- | Resource limits for the 'setrlimit' syscall
    , rlimits     :: Maybe RLimits
      -- | The directory that the evaluator process will be 'chroot'ed
      -- into. Please note that if chroot is applied, all the paths
      -- in 'EvalSettings' will be calculated relative to this
      -- value.
    , chrootPath  :: Maybe FilePath
      -- | The UID that will be set after the call to chroot.
    , processUid  :: Maybe UserID
      -- | SELinux security context under which the worker 
      -- process will be running.
    , secontext   :: Maybe SecurityContext
      -- | A filepath to the 'tasks' file for the desired cgroup.
      -- 
      -- For example, if I have mounted the @cpu@ controller at
      -- @/cgroups/cpu/@ and I want the evaluator to be running in the
      -- cgroup 'idiaworkers' then the 'cgroupPath' would be
      -- @/cgroups/cpu/idiaworkers@
    , cgroupPath  :: Maybe FilePath
    } deriving (Eq, Show, Generic)

There is also a Default instance for LimitSettings and RLimits with most of the restrictions turned off:

defaultLimits :: LimitSettings
defaultLimits = LimitSettings
    { timeout    = 3
    , niceness   = 10
    , rlimits    = Nothing
    , chrootPath = Nothing
    , processUid = Nothing
    , secontext  = Nothing 
    , cgroupPath = Nothing
    }
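
As a usage sketch, stricter settings can be obtained by overriding the defaults. The field values below are illustrative (the paths are the ones used later in this post):

myLimits :: LimitSettings
myLimits = defaultLimits
    { timeout    = 5
    , niceness   = 15
    , chrootPath = Just "/idia/run/workers/worker1"
    , cgroupPath = Just "/cgroups/cpu/idiaworkers"
    }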

Below I’ll briefly describe each limitation/restriction with some details.

Timeout & niceness

The timeout field specifies how much time (in seconds) the server waits for the worker. (Note: this is the only limitation that is enforced on the web server’s side. The corresponding procedure is processTimeout. We really want this to run in a multithreaded environment.)

Niceness is merely the value passed to the nice() syscall, nothing special.
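
A minimal sketch of this kind of server-side time limit, using System.Timeout rather than the actual processTimeout code:

module Main where

import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

-- Run an action with a limit of n seconds; Nothing means the worker was too slow.
withTimeLimit :: Int -> IO a -> IO (Maybe a)
withTimeLimit n = timeout (n * 1000000)

main :: IO ()
main = do
    r <- withTimeLimit 3 (threadDelay 5000000 >> return "result")
    print r   -- prints Nothing: the fake worker needed 5 seconds, the limit was 3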

rlimits

The resource limits are controlled by calls to the setrlimit syscall. The limits themselves are defined in the RLimits datatype:

data RLimits = RLimits
    { coreFileSizeLimit :: ResourceLimits
    , cpuTimeLimit      :: ResourceLimits
    , dataSizeLimit     :: ResourceLimits
    , fileSizeLimit     :: ResourceLimits
    , openFilesLimit    :: ResourceLimits
    , stackSizeLimit    :: ResourceLimits
    , totalMemoryLimit  :: ResourceLimits
    } deriving (Eq, Show, Generic)

ResourceLimits itself is defined in System.Posix.Resource. For more information on resource limits see setrlimit(2).
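
For illustration, applying a single limit through System.Posix.Resource looks roughly like this (a sketch, not the project's actual code that interprets RLimits; the 10-second value is arbitrary):

module Main where

import System.Posix.Resource

-- Cap the process's CPU time at 10 seconds (soft and hard limit alike).
capCpuTime :: IO ()
capCpuTime = setResourceLimit ResourceCPUTime
    (ResourceLimits { softLimit = ResourceLimit 10, hardLimit = ResourceLimit 10 })

main :: IO ()
main = do
    capCpuTime
    putStrLn "CPU time limit set to 10 seconds"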

Chrooted jail

In order to restrict the worker process we run it inside a chroot jail. The easiest way to create a fully working jail is to use debootstrap. It’s also necessary to install gcc and the GHC libraries inside the jail.

mkdir
sudo debootstrap wheezy /idia/run/workers/worker1
sudo chmod  /idia/run/workers/worker1
cd /idia/run/workers/worker1
sudo mkdir -p ./home/
sudo chown  ./home/
cd ./home/
mkdir .ghc && sudo mount --bind ~/.ghc .ghc
mkdir .cabal && sudo mount --bind ~/.cabal .cabal
mkdir ghc && sudo mount --bind ~/ghc ghc # ghc libs
cd ../..
cp ~/interactive-diagrams/common/Helper.hs .
sudo chroot .
apt-get install gcc # inside the chroot

I tried installing Emdebian using multistrap to reduce the size of the jail, but GHC wouldn’t run properly in that environment, complaining about librt.so (which was present on the system), so I decided to stick with debootstrap. If anyone knows how to avoid this problem with multistrap, please mail me or leave a comment.

Process uid

This is the uid the worker process will run under. The socket file will also be created by the user with this uid.
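
In code this boils down to a setUserID call made right after the chroot; a sketch (uid 1000 is an arbitrary example, not the uid used on the server):

module Main where

import System.Posix.User (setUserID)

-- Drop root privileges as soon as the chroot is in place.
dropPrivileges :: IO ()
dropPrivileges = setUserID 1000   -- example uid; the real value comes from processUid

main :: IO ()
main = dropPrivileges >> putStrLn "now running as the unprivileged worker uid"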

SELinux

SELinux (Security-Enhanced Linux) is a Linux kernel module providing mechanisms for enforcing fine-grained mandatory access control, brought to you by the creators of the infamous PRISM!

SELinux allows the system administrator to control the security of the system by specifying AVC (access vector cache) rules in (modular) policy files. The SELinux kernel module sits there monitoring all the syscalls and, if it finds something that is not explicitly allowed in the policy, it blocks it. (Well, actually something a little bit different is going on, but for the sake of simplicity I am leaving it out.)

Everything on your system – files, network sockets, file handles, processes, directories – is labelled with an SELinux security context, which consists of a role, a user name (not related to the regular system user name) and a domain (also called a type in some literature). In the policy file you specify which domains are allowed to perform various actions on other domains. A typical piece of a policy file looks like this:

allow myprocess_t self:udp_socket { create connect };
allow myprocess_t bin_t:file { execute };

The first line states that the process from the domain myprocess_t is allowed to create and connect to the UDP sockets of the same domain. The second line allows a process in that domain to execute files of type bin_t (usually files in /bin/ and /usr/bin).

Note: the secontext field actually contains only the security domain. When the worker process changes its security context, it keeps the same user/role it originally had.

In our SELinux policy we have several domains:

  • idia_web_t - the domain under which scotty-pastebin runs
  • idia_web_exec_t - the domain of the scotty-pastebin executable and other files associated with that binary
  • idia_service_t - the domain under which eval-service runs
  • idia_service_exec_t - the domain of the eval-service executable and other files associated with that binary
  • idia_service_sock_t - UNIX socket files used for communication
  • idia_db_t, idia_pkg_t, idia_web_common_t - database files, packages, html files, templates and other stuff
  • idia_worker_env_t - chroot’ed environment in which the worker operates
  • idia_restricted_t - the most restricted domain in which the workers run and evaluate code

The reason we made the service program run in a single-threaded environment is the following: if we ran it in a multi-threaded environment (as we wanted to), the worker processes would end up with access to file descriptors inherited from idia_service_t, which, of course, is dangerous and should not be allowed.

I personally don’t enjoy using SELinux very much. It’s very hard to configure, and among its shortcomings I can list the fact that there is no distinction between file types and process types, and that there is no proper separation even when using the modular policy, since duplicated types are only checked when you load the module and there is no way (that I know of) to easily introduce a fresh unused type. And there is one thing that puzzled me for quite a while: home directories are treated specially. Even if you configure a subdirectory in your home dir to have a specific security context, restorecon won’t correctly install the context specified in the policy. You actually have to set the context yourself, using chcon.

Cgroups

Cgroups (control groups) is a Linux facility for controlling how the kernel schedules CPU time/shares and distributes memory among processes. It does so by organizing processes into hierarchical groups with configured behaviour.

Installing cgroups on Debian is somewhat tricky, because the packaging is a little bit weird.

sudo apt-get install cgroup-bin libcgroup1
sudo cgconfigparser -l ~/interactive-diagrams/cgconfig.conf

For our purposes we have a cgroup called idiaworkers. We also mount the cpu controller on /cgroups/cpu:

$> ls -l /cgroups/cpu/
total 0
-rw-r--r--. 1 root root 0 Jul 12 16:22 cgroup.clone_children
--w--w--w-. 1 root root 0 Jul 12 16:22 cgroup.event_control
-rw-r--r--. 1 root root 0 Jul 12 16:22 cgroup.procs
-rw-r--r--. 1 root root 0 Jul 12 16:22 cpu.shares
drwxr-xr-x. 2 root root 0 Jul 12 16:22 idiaworkers
-rw-r--r--. 1 root root 0 Jul 12 16:22 notify_on_release
-rw-r--r--. 1 root root 0 Jul 12 16:22 release_agent
-rw-r--r--. 1 root root 0 Jul 12 16:22 tasks
$> ls -l /cgroups/cpu/idiaworkers
total 0
-rw-r--r--. 1 root    root 0 Jul 12 16:22 cgroup.clone_children
--w--w--w-. 1 root    root 0 Jul 12 16:22 cgroup.event_control
-rw-r--r--. 1 root    root 0 Jul 12 16:22 cgroup.procs
-rw-r--r--. 1 root    root 0 Jul 12 16:22 cpu.shares
-rw-r--r--. 1 root    root 0 Jul 12 16:22 notify_on_release
-rw-r--r--. 1 vagrant root 0 Jul 14 06:21 tasks

In order to modify how much CPU time our group gets, we write to the cpu.shares file: echo 100 | sudo tee /cgroups/cpu/idiaworkers/cpu.shares (note that sudo echo 100 > … would not work, because the redirection would happen in the unprivileged shell). If we want to add a task/process to the group we simply append its pid to the tasks file: echo $PID >> /cgroups/cpu/idiaworkers/tasks. The workers append themselves to the tasks file automatically (if the cgroup restrictions are enabled in the LimitSettings).
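
The worker-side part of that is tiny; here is a sketch of it in Haskell, using the example cgroup path from above (the function name is made up; the real code lives in restricted-workers):

module Main where

import System.Posix.Process (getProcessID)

-- Put the current process into the idiaworkers cgroup by appending its pid
-- to the cgroup's tasks file.
attachToCgroup :: FilePath -> IO ()
attachToCgroup tasksFile = do
    pid <- getProcessID
    appendFile tasksFile (show pid ++ "\n")

main :: IO ()
main = attachToCgroup "/cgroups/cpu/idiaworkers/tasks"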

Open problems/requests

I am still not sure how to write tests for this project. Do I write tests for my GHC API wrappers? Do I write tests for my worker pool? I should probably take a look at how similar projects handle this.

Outro

So, as you can see, we have something working here, and now that we have taken the initial steps it will be much easier for us to push changes and make them available for the public to use and comment on. There is still a long way to go. The code needs some serious cleanup (we switched the design model a couple of weeks ago, which seriously affected the internal structure), and the documentation needs to be written. And of course new features are waiting to be implemented :) We will also be supporting multiple UIDs for workers and looking into using LXC to simplify the setup process.

I would like to thank augur and luite for their editorial feedback.

Stay tuned for the next posts, about configuring the evaluation settings and reusing the components from the library.

Building GHCJS

1 Intro

In this post I would like to talk about my experience with
bootstrapping GHCJS using the facilities provided by ghcjs-build. I
had never used tools like Vagrant or Puppet before, so all of this was
kind of new to me.

2 Initial installation

GHCJS can’t actually work with vanilla GHC 7.* as it requires some
patches to be applied (in order to get the JavaScript FFI to work, it
adds the JavaScriptFFI language extension, among other modifications).

ghcjs-build uses Vagrant (a tool for automatically building and
running work environments) to manage the work environment, so prior to
running GHCJS you need to install Vagrant and VirtualBox. It's actually
a sensible way to tackle a project like this: everyone has a similar
work environment, and you don't have to mess with your local GHC
installation. It also makes use of the Puppet deployment system and the
puppetlabs-vcsrepo module for cloning Git repositories.

Currently, there are two ways to start up GHCJS using ghcjs-build.

2.1 Using the prebuilt version

git clone https://github.com/ghcjs/ghcjs-build.git
cd ghcjs-build
git checkout prebuilt
vagrant up

With this configuration, the following steps are performed:

  1. Vagrant sets up a 32-bit Ubuntu Precise system (note: if this is
    your first time running Vagrant, it downloads the ~280 MB
    precise32.box file from the Vagrant site)
  2. Vagrant does some provisioning using Puppet (downloads and
    installs the necessary packages)
  3. A 1.4GB archive with ghcjs and other prebuilt tools is downloaded
    and extracted.

2.2 Compiling from source

git clone https://github.com/ghcjs/ghcjs-build.git
cd ghcjs-build
vagrant up

Apart from setting up the box, this will:

  1. Get the GHC sources from Git HEAD and apply the GHCJS patch.
  2. Get all the necessary packages for ghcjs.
  3. Get the latest Cabal from Git HEAD, apply the GHCJS patch and
    build it.
  4. Compile the necessary libraries using ghcjs.
  5. Compile ghcjs-examples and its dependencies (it appears that it
    can take a lot of time to compile gtk2hs and gtk2hs's tools).

Please note that, depending on your computer, you might want to go
for a long walk, enjoy a short book or get a night's sleep while this
runs (assuming you are not scared off by the sound of computer fans).

Apart from being slow, the process of compiling everything from
source is error-prone. To give you a taste: last night I was not able
to reproduce a working environment myself because of some recent
changes in GHC HEAD. The prebuilt version, on the other hand, is
guaranteed to install correctly.

Hopefully, the GHCJS patches will be merged upstream before GHC 7.8
is out. That way you won't need to build GHC from source in order to
use GHCJS.

2.3 Communicating with the VM

After you've finished with the initial setup you should be able just
to

vagrant ssh

in your new vm and start messing around.

The ghcjs command is available to you, and Vagrant kindly forwards
port 3000 on the VM to local port 3030, allowing you to run web
servers like warp on the VM and access them locally.
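
A quick way to check the forwarding (a minimal sketch, assuming scotty is installed in the VM): run this inside the VM and then open http://localhost:3030 on the host.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Web.Scotty

-- Listens on port 3000 inside the VM, which Vagrant exposes as port 3030 on the host.
main :: IO ()
main = scotty 3000 $ get "/" $ text "hello from the ghcjs VM"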

You can access your local project directory under /vagrant in the VM:

$ ls /vagrant
keys  manifests  modules  outputs  README.rst  Vagrantfile

However, copying files back and forth is not a perfect solution. I
recommend setting up an sshfs filesystem (note: if you are on OS X,
don't forget to install the fuse4x kernel extension):

$ vagrant ssh-config
  Host default
    HostName 127.0.0.1
    User vagrant
    Port 2222
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    PasswordAuthentication no
    IdentityFile "/Users/dan/.vagrant.d/insecure_private_key"
    IdentitiesOnly yes
    LogLevel FATAL
$ sshfs vagrant@localhost:/home/vagrant ../vm -p2222 -oreconnect,defer_permissions,negative_vncache,volname=ghcjs,IdentityFile=~/.vagrant.d/insecure_private_key 
$ ls ../vm

When you are done, you can just umount ../vm.

3 Compiling other packages

Since the diagrams package on Hackage depends on an older version
of base, we are going to use the latest versions from Git:

mkdir dia; cd dia
git clone git://github.com/diagrams/diagrams-core.git
cd diagrams-core && cabal install && cd ..

cabal unpack active
cd active-0.1*
cat >version.patch <<EOF
--- active.cabal        2013-06-12 12:58:40.082914214 +0000
+++ active.cabal.new    2013-06-12 12:58:31.029465815 +0000
@@ -19,7 +19,7 @@

 library
   exposed-modules:     Data.Active
-  build-depends:       base >= 4.0 && < 4.7,
+  build-depends:       base >= 4.0 && < 4.8,
                        array >= 0.3 && < 0.5,
                        semigroups >= 0.1 && < 0.10,
                        semigroupoids >= 1.2 && < 3.1,
@@ -31,7 +31,7 @@
 test-suite active-tests
     type:              exitcode-stdio-1.0
     main-is:           active-tests.hs
-    build-depends:     base >= 4.0 && < 4.7,
+    build-depends:     base >= 4.0 && < 4.8,
                        array >= 0.3 && < 0.5,
                        semigroups >= 0.1 && < 0.10,
                        semigroupoids >= 1.2 && < 3.1,
EOF
patch active.cabal < version.patch
cabal install
cd ..

git clone git://github.com/diagrams/diagrams-lib.git
cd diagrams-lib && cabal install && cd ..

git clone git://github.com/diagrams/diagrams-svg.git
cd diagrams-svg && cabal install && cd ..

Other packages I had to install already had their Hackage versions
updated.

Now you can try to build a test diagram to see that everything works:

module Main where

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine

d :: Diagram SVG R2
d = square 20 # lw 0.5
              # fc black
              # lc green
              # dashing [0.2,0.2] 0

main = defaultMain (pad 1.1 d)

Then you can compile and run it:

ghc --make Test.hs 
./Test -w 400 -o /vagrant/test.svg

[screenshot: the rendered test diagram]

And that's it!

4 Outro

I would also like to note that we are currently polishing the GHCJS
build process. Luite, especially, is working on making ghcjs work (and
run tests) with Travis CI (it takes quite a bit of time to build ghcjs
and sometimes Travis times out), and I am working on tidying up the
build config.

Stay tuned for more updates.

Summer of Code

Hello, everyone!

I’ve decided to reinstate this blog since I’ve been accepted to this year’s Google Summer of Code program. I’ll blog about my updates, the stuff I’ve been working on, and the bottlenecks and problems I’ve encountered.

My project is a pastebin site that uses diagrams and GHCJS to generate embeddable interactive widgets, or static images/text in cases when the pasted code does not require additional interaction. My mentor is Luite Stegeman, and Brent Yorgey and other nice people from the diagrams community have agreed to help.

I am very excited about this, and happy to have a whole bunch of smart people helping me.

Unfortunately, as we haven’t sorted out a completely safe way to evaluate code coming from third parties, there is no public version hosted anywhere yet. Meanwhile, there is a project on GitHub.

Hopefully, soon I’ll be able to publish a post about my experience with bootstrapping GHCJS.
Until then, stay tuned!