Blog

  • Tyranny of Virtualization

    Last Christmas I finally upgraded my seven-year-old MacBook. I backed up all my pictures, code, and dotfiles, but I left one very major thing out: I forgot to copy over my private keys. These were tied to all the AWS deploy keys I had set up with Terraform over the past couple of years, which meant I couldn’t get into my server to renew my SSL certs (which, for some awful reason I never managed to debug, did not renew automatically). So I locked myself out of my own fortress. Great. In retrospect, though, my whole setup for hosting this blog had been too close to the metal. I went down that path because I wanted to learn, but I have no intention of maintaining it that way anymore.

    To be clear, to host this site previously, I had:

    1. Provisioned EC2 instances, IAM roles, EBS volumes, and so on via Terraform. I had to debug issues with servers that weren’t provisioned properly, and to make sure I understood how everything worked, I would destroy and recreate the entire setup several times until it was absolutely turnkey.

    2. Bootstrapped the server with hand-crafted Ansible playbooks that installed nginx and brute-force-banning tools like fail2ban, and automated cert renewal with Let’s Encrypt. Fought with Ubuntu releases falling out of support mid-development, and with the differences between them.

    I haven’t even gone into some of my projects, which included bootstrapping a similar EC2 instance running a Docker daemon with Selenium containers, all struggling on a t2.nano because I was too cheap to pay for more. I didn’t feel like running apps in 2018 ought to cost more than a couple of bucks a month. I still think that’s true, but in return I had to move higher up the virtualization stack.

    I had already been relying on a lot of AWS infrastructure, but up till now I still felt in control of what was happening on my boxes, and confident that, if I wanted to, I could move my compute to another cloud provider and everything would still work. With the shift to S3 and CloudFront, that changed.

    This site is generated with Hugo, and there is no reason a static site needs to be served by nginx. :) Still, I was learning, and none of that was wasted. I’m lucky that AWS S3 makes website hosting super easy. It does not support HTTPS, though, which is a showstopper for me. To get HTTPS working, I need to provision a CloudFront distribution that serves the S3 bucket. Naturally, this was all done through Terraform.

    I don’t know what my AWS bill is going to look like, but I expect it to be zero, since this site does not get nearly enough traffic to exceed the free tier. Out the window went all of the nginx config files and AWS policies I had created earlier. My site is cheaper and much faster, but in the process I’m sucked a little deeper into the vortex of the AWS ecosystem… what do they call it, causal pleasure?


  • Hoisting JS

    I’m going through Joel Martin’s amazing teaching tool, mal, which teaches you how to write your own Lisp from scratch. I wanted to get a little better at JavaScript, so I chose it as my implementation language.

    I’m really glad I’m writing JavaScript in the era of ES6, but there was a bit of hoisting behavior that definitely took me by surprise.

    
    function EVAL(ast, env) {
      ...
      switch (ast[0]) {
        case "def!":
          [__, first, second] = ast
          let value = EVAL(second, env)
          ...
          return value
        case "let*":
          [__, first, second] = ast
          ...
          return EVAL(second, newEnv)
        ...
    
    ...
    

    It turns out that because the array destructuring assigns to variables that are never declared, they aren’t scoped to the function at all: in sloppy mode they are created on the global object. I only detected this issue with a def! that was nested inside a let*. In that situation the variable ‘second’ would be overwritten by the nested call, so its value had silently changed by the time control returned to the caller.
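
    To make that concrete, here is a stripped-down sketch of the failure mode (the names and shapes are made up for illustration, nowhere near the real mal evaluator): because nothing declares the destructured variables, the nested call and the outer call end up sharing a single global ‘second’.

    // Must run in sloppy mode (a plain Node script, no "use strict").
    // The destructuring never declares its targets, so op, first, and
    // second all land on the global object and are shared across calls.
    function evalForm(ast) {
      [op, first, second] = ast
      if (op === "let*") {
        evalForm(second)             // the nested call reassigns the SAME global second
      }
      return second                  // ...which is no longer the outer form's second
    }

    console.log(evalForm(["let*", ["x", 1], ["def!", "y", 2]]))
    // Prints 2 (the nested form's second), not the outer body we expected.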

    If only I had remembered to enable strict mode.
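
    For what it’s worth, here is a quick sketch (again with made-up values, not the actual mal code) of how either remedy surfaces the problem: strict mode turns the undeclared assignment into a ReferenceError, and declaring the destructured bindings keeps them scoped locally in the first place.

    "use strict"

    const ast = ["def!", "x", 42]

    // With strict mode on, the original undeclared assignment fails loudly
    // instead of quietly creating globals.
    try {
      [__, first, second] = ast
    } catch (e) {
      console.log(e.name)            // "ReferenceError"
    }

    // The other fix: declare the bindings, so they stay scoped to the
    // enclosing block instead of being shared through the global object.
    const [, sym, val] = ast
    console.log(sym, val)            // "x" 42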


  • Poetry at SFPC

    Last year I was part of an independent art school in NYC called SFPC. A lot of people have since asked me what it is, and what it means to be in such a school. The site has a blurb on what they do, but that description still leaves me and the people I explain it to slightly puzzled.

    So what is SFPC? It is a school that teaches computation in the service of art. It strives to give students the tools to express themselves in the medium of computation. I think the choice of the word computation is deliberate: it encompasses more than software and, to me, is wider than the word ‘algorithm’, but it still implies some sort of mechanization. The school also teaches students to look at technology critically.

    So what really is SFPC? I think for artists, who are used to expression, it can be a way of learning the ‘engineering’ aspects of working with hardware and software, much like how photographers might eventually have to learn the intricacies of zone metering systems. For engineers like myself, it was about learning how artists see the world, and how to evaluate things qualitatively as opposed to quantitatively.

    The artist Jer Thorp once said to me, “I’m allergic to outcomes”. I had asked him how he knew his St. Louis Map Room project was successful, and what metrics he was interested in using to measure its success. I highly recommend checking out his blog post on it. I think artists believe, rightly or wrongly, that the process of asking questions will eventually lead to the right answers - the caveat being that the right questions must be asked, by the right people. Implicit in that is that there is no Right answer. That can be pretty hard for an engineer to stomach.

    10 weeks is a short time to unlearn some deeply ingrained ways of thinking.