OCTOBER 17, 2018

This year, the ACADIA conference is taking place at UNAM's Facultad de Arquitectura, Mexico City. As part of the Talk to a Wall workshop, Cristobal Valenzuela (@c_valenzuelab) talked about his work on RunwayML and ml5js, and about much of what is currently going on in the field of artificial intelligence, machine learning, and deep learning.

Alongside his definition of artificial intelligence, "[the] simulation of intelligent behavior in computers," he shared the following quotes from some of the most relevant artificial intelligence researchers of the last few years.

Models for Thinking, Perception, Action.

—Patrick H. Winston, MIT

Many things can be AI, including simple programming. AI is the automation of thought.

—François Chollet, researcher and author of Keras

A field of study that gives computers the ability to learn without being explicitly programmed.

—Arthur Samuel, MIT. Samuel Checkers, 1957

If you're interested in artificial intelligence and machine learning, you should definitely follow @c_valenzuelab, @ml5js, and @runwayml.

SEPTEMBER 26, 2018

Lobe is a web-based visual programming tool to create and deploy machine learning models. The company behind it was founded in 2015 by Mike Matas, Adam Menges, and Markus Beissinger "to make deep learning accessible to everyone," and was recently acquired by Microsoft.

Lobe is an easy-to-use visual tool that lets you build custom deep learning models, quickly train them, and ship them directly in your app without writing code.

I saw a live demo at SmartGeometry earlier this year and I can't wait to play with it once it's deployed on Microsoft's servers.

You can see a few examples at Lobe.ai. (They're looking for people to join their team.)


Watch this video to see examples of things people have built using Lobe and how to build your own custom deep learning models.

SEPTEMBER 25, 2018

From PNG to JPG, using ImageMagick's mogrify.

mogrify -format jpg *.png

From JPG to PNG.

mogrify -format png *.jpg

AUGUST 28, 2018

In TypeScript, as in other languages, Array.map allows you to apply a function to each of the items in a list or array. You can pass an existing function that takes each item as its input parameter (say, the existing Math.sqrt function) or one that you define yourself.

let list = [0, 1, 2, 3]; // [0, 1, 2, 3]
list.map(Math.sqrt); // [ 0, 1, 1.414.., 1.732.. ]

Or you can define a lambda function on the go.

let list = [0, 1, 2, 3]; // [0, 1, 2, 3]
// The callback also receives the item's index and the whole array if you need them.
list.map((value) => value * 2); // [0, 2, 4, 6]

AUGUST 19, 2018

There is a nifty way to specify how you want each of the pages (or Laravel routes) of your site to be indexed by search engines. In my case, I looked at the Robots meta tag and X-Robots-Tag HTTP header specifications to learn more about what was possible.

In short, you might tell Google a specific route or page has "no restrictions for indexing or serving" by setting the X-Robots-Tag HTTP header to all or, on the contrary, tell it to stop indexing the page (or saving cached versions of it) with the noindex value.
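Under the hood, this is just an HTTP response header, and the middleware below automates adding it to every response. You could also set it manually on a single route; here is a minimal sketch, where the /secret route and view are made up for illustration.

// routes/web.php

Route::get('/secret', function () {
    // Ask search engines not to index or cache this page.
    return response()
        ->view('secret')
        ->header('X-Robots-Tag', 'noindex, noarchive');
});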

In Laravel, the guys at Spatie made it really easy. Just install their spatie/laravel-robots-middleware Composer package in your Laravel app with:

composer require spatie/laravel-robots-middleware

Let's see a few examples on how to use this.

Allow every single page to be indexed and served

Create a new middleware in your application.

// app/Http/Middleware/MyRobotsMiddleware.php

<?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Spatie\RobotsMiddleware\RobotsMiddleware;

class MyRobotsMiddleware extends RobotsMiddleware
{
    /**
     * @return string|bool
     */
    protected function shouldIndex(Request $request)
    {
        return 'all';
    }
}

And then register your new middleware in the middleware stack.

// app/Http/Kernel.php

class Kernel extends HttpKernel
{
    protected $middleware = [
        // ...
        \App\Http\Middleware\MyRobotsMiddleware::class,
    ];

    // ...
}

Forbid every single page from being indexed, cached, and served

// app/Http/Middleware/BlockAllRobotsMiddleware.php

<?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Spatie\RobotsMiddleware\RobotsMiddleware;

class BlockAllRobotsMiddleware extends RobotsMiddleware
{
    /**
     * @return string|bool
     */
    protected function shouldIndex(Request $request)
    {
        return 'noindex';
    }
}

Conditional robots middleware

Probably the most interesting application of this middleware is to embed smarter logic that avoids indexing specific pages while letting Google (and other search engines) crawl the pages you do want to expose.

We could send a noindex header for our admin pages only, for instance.

// app/Http/Middleware/SelectiveRobotsMiddleware.php

<?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Spatie\RobotsMiddleware\RobotsMiddleware;

class SelectiveRobotsMiddleware extends RobotsMiddleware
{
    protected function shouldIndex(Request $request) : string
    {
        if ($request->segment(1) === 'admin') {
            return 'noindex';
        }
        return 'all';
    }
}

Remember that you need to add all of your new middlewares to the app/Http/Kernel.php file in order for them to be called on each request. This method can be handy to block search indexing with noindex or to customize the way search engines are allowed to process your pages. Here are other directives you can use in the X-Robots-Tag HTTP header and what they mean.

  • all - There are no restrictions for indexing or serving. Note: this directive is the default value and has no effect if explicitly listed.
  • noindex - Do not show this page in search results and do not show a "Cached" link in search results.
  • nofollow - Do not follow the links on this page.
  • none - Equivalent to noindex, nofollow.
  • noarchive - Do not show a "Cached" link in search results.
  • nosnippet - Do not show a text snippet or video preview in the search results for this page. A static thumbnail (if available) will still be visible.
  • notranslate - Do not offer translation of this page in search results.
  • noimageindex - Do not index images on this page.
  • unavailable_after: [RFC-850 date/time] - Do not show this page in search results after the specified date/time. The date/time must be specified in the RFC 850 format.

Thanks!

I hope you found this useful. Feel free to ping me at @nonoesp or join the mailing list. Here are some other Laravel posts and code-related posts.

JULY 31, 2018

ViveTrack is a DynamoBIM package that allows real-time reading of HTC Vive spatial tracking data, developed by Jose Luis García del Castillo y López (@garciadelcast) at the Generative Design Group at Autodesk.

JULY 15, 2018

Simplified Facebook Login Screen

Facebook's homepage mainly prompts you to sign up and create a new account. But you only have to do that once. Every single time you access Facebook afterwards, you probably just want to log in. With the following steps, you'll be able to hide everything but the login form.


This workflow overrides the styling of some page elements to hide them. You just need to paste the following code inside the Stylebot Chrome extension while you have Facebook.com open in your browser; it will hide the HTML elements that clutter your screen and leave a clean interface for you to sign in.

#pagelet_video_home_suggested_for_you_rhc,
#createNav,
#appsNav,
#pageFooter,
.fb_logo,
.pvl,
.login_form_label_field {
    display: none;
}

How to install Stylebot (and apply this style to Facebook.com)

  • Open this page on Google Chrome.
  • Click on Add to Chrome.
  • Go to Facebook.com
  • Open Stylebot by clicking the CSS icon you just installed, in your browser's top-right panel.
  • Then select Open Stylebot...
  • Paste the code snippet in the text editor.
  • Press Save.

Beware that, as Facebook updates its CSS class names (that is, the way they name the code that styles their website), this snippet will need to be adjusted to accommodate the interface changes.

APRIL 5, 2018

To import JSON into your TypeScript code, you need to add the following code to a typings file (a file with a name like *.d.ts, say, json.d.ts—but it does not necessarily need to say json)1.

// This will allow you to load `.json` files from disk

declare module "*.json"
{ const value: any;
  export default value;
}

// This will allow you to load JSON from remote URL responses

declare module "json!*"
{ const value: any;
  export default value;
}

After doing this, you can do the following in TypeScript.

import * as graph from './data/graph.json';
import data from "json!http://foo.com/data_returns_json_response/";

You can then use graph and data as JSON objects in your TypeScript code.


  1. I used this code to load a Dynamo JSON graph into TypeScript — just change the .dyn extension to .json and it will work with this code. ↩︎

MARCH 16, 2018

Hey! Jose Luis and I will be running a workshop called Mind Ex Machina at the forthcoming SmartGeometry conference in Toronto (May 7–12, 2018). We will be exploring the creative potential of human-robot interfaces with machine intelligence. You should come!

What is SmartGeometry?

SmartGeometry is a bi-annual workshop and conference which "[gathers] the global community of innovators and pioneers in the fields of architecture, design, and engineering."

Each edition, the event takes place at a different location around the world (previous locations include Gothenburg, Hong Kong, London, and Barcelona) and features a challenge to be tackled by each of the ten "clusters" that make up the conference's workshops.

This year's challenge—Machine Minds—will take place at the University of Toronto, Canada, May 7–12, 2018. The four-day workshop, May 7–10, will be followed by a two-day conference, May 11–12.

What are we doing?

As mentioned before, this year, Jose Luis García del Castillo and I are leading the Mind Ex Machina cluster, which will explore the possibilities of creative human-robot interactions with the use of machine intelligence. Here is a more detailed description of our cluster's goals.

Robot programming interfaces are frequently developed to maximise performance, precision and efficiency in manufacturing environments, using procedural deterministic paradigms. While this is ideal for engineering tasks, it may become constraining in design contexts where flexibility, adaptability and a certain degree of indeterminacy are desired, in order to favour the exploratory nature of creative inquiry. This workshop will explore the possibilities of goal-oriented, non-deterministic real-time robot programming through Machine Intelligence (machine learning and artificial intelligence) in the context of collaborative design tasks. We argue that these new paradigms can be particularly fit for robot programming in creative contexts, and can help designers overcome the high entry barrier that robot programming typically features. Participants will be encouraged to explore this possibility through the conception and implementation of machine intelligence-aided interfaces for human-robot collaborative tasks.

Why should you come?

Machine intelligence is becoming ubiquitous, and slick, complex mathematical models are being developed (and open sourced) to provide our machines with pieces of intelligence to perform a wide variety of tasks (from object or face or speech recognition to image style transfer, drawing, or even music composition).

It is our responsibility as architects, designers, and engineers to envision how we will use these technologies in our own field, to explore new paradigms of interaction, and to discover their role in our creative processes.


Cluster applications for SmartGeometry 2018 are still open. (There are only a few spots left!) Take a look at all the different clusters and sign up here. You can also keep track of our cluster's work on our private mailing list.

SEPTEMBER 16, 2017

To make sure your Laravel application doesn't break when you are applying changes to your database, it's good practice to check whether a table exists before making any calls to it.

\Schema::hasTable('users');
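For instance, you could guard a query behind that check so a dashboard widget keeps working while a migration is still pending. A minimal sketch, with a hypothetical users count:

// Only query the table if it already exists in the database.
$userCount = \Schema::hasTable('users')
    ? \DB::table('users')->count()
    : 0;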

MAY 20, 2017

For the last four months, I've been working on my master's thesis—Suggestive Drawing Among Human and Artificial Intelligences—at the Harvard Graduate School of Design. You can read a brief summary below.

The publication intends to explain what Suggestive Drawing is all about, in a language that, hopefully, can be understood by artists, designers, and other professionals with no coding skills.

You can read the interactive web publication or download it as a PDF.

For the tech-savvy, and for those who would like to dive in and learn more about how the working prototype of the project was developed, I'm preparing a supplemental Technical Report that will be available online.


A Brief Summary

We use sketching to represent the world. Design software has made tedious drawing tasks trivial, but we can't yet consider machines to be participants in how we interpret the world, as they cannot perceive it. In the last few years, artificial intelligence has experienced a boom, and machine learning is becoming ubiquitous. This presents an opportunity to incorporate machines as participants in the creative process.

In order to explore this, I created an application—a suggestive drawing environment—where humans can work in synergy with bots1 that have a certain character, with non-deterministic and semi-autonomous behaviors. The project explores the user experience of drawing with machines, escapes the point-and-click paradigm with a continuous flow of interaction, and enables a new branch of creative mediation with bots that can develop their own aesthetics. The result is a new form of collective creativity in which human and non-human participation yields synergetic pieces that express each participant's character.

In this realm, the curation of image data sets for training an artificially intelligent bot becomes part of the design process. Artists and designers can fine tune the behavior of an algorithm by feeding it with images, programming the machine by example without writing a single line of code.

Drawing Among Humans and Machines

The application incorporates behavior—humans and bots—but not toolbars, and memory—as it stores and provides context for what has been drawn—but no explicit layer structure. Actions are grouped by spatial and temporal proximity that dynamically adjusts in order not to interrupt the flow of interaction. The system allows users to access it from different devices, and also lets bots see what we are drawing in order to participate in the process. In contrast to interfaces of clicks and commands, this application features a continuous flow of interaction with no toolbars, just bots with behavior. The following diagram shows a simple drawing suggestion: I draw a flower and a bot suggests a texture to fill it in. In this interface, you can select multiple human or artificial intelligences with different capabilities and delegate tasks to them.

User Interface and Sample Drawing Suggestion

Suggestive Drawing Bots

I developed three drawing bots—texturer, sketcher, and continuator—that suggest texture, hand-sketched detail, or ways to continue your drawings, respectively. Classifier recognizes what you are drawing, colorizer adds color, and rationalizer rationalizes shapes and geometry. Learner sorts drawings in order to use existing drawings for training new bots according to a desired drawing character, allowing the artist to transfer a particular aesthetic to a given bot. In training a bot, one of the biggest challenges is the need to either find or generate an image data set from which bots can learn.

Onward

This project presents a way for artists and designers to use complex artificial intelligence models and interact with them in familiar mediums. The development of new models—and the exploration of their potential uses—is a road that lies ahead. As designers and artists, I believe it is our responsibility to envision and explore the interactions that will make machine intelligence a useful companion in our creative processes.


Thanks so much for reading.


  1. According to the English Oxford Dictionary, a bot is an autonomous program on a network (especially the Internet) which can interact with systems or users. ↩︎

MAY 4, 2017

I'm one week away from my master’s thesis presentation—Suggestive Drawing Among Human and Artificial Intelligences—which will take place on Wednesday May 10 at 11:20 am at the Harvard Graduate School of Design, room 123.

Suggestive Drawing Countdown

This teaser page features a countdown with an illustration of suggestive drawing bots—artificially-intelligent bots that help you draw.

Take a look and subscribe if you want to be notified when the project is released. (You can also just check nono.ma/ai in 7 to 10 days.)

MARCH 2, 2017

When using Laravel, it is common to sort Eloquent models obtained with the query builder by calling ->orderBy('created_at', 'DESC'), for instance. But that is not possible once you are working with an Eloquent Collection (Illuminate\Database\Eloquent\Collection) that has already been retrieved. In that case, we need to pass a sorting closure to the ->sortBy() method. Say, for example, that our items have an order property and we want to sort by it in descending order; returning the negated value does the trick:

$items = $items->sortBy(function($item) {
  return -$item->order;
});
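If all you need is a descending sort by a single property, collections also offer sortByDesc, which reads a bit more clearly. A quick sketch (values() simply re-indexes the resulting keys):

$items = $items->sortByDesc('order')->values();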

NOVEMBER 21, 2016

You might, as I did, find yourself wanting to give a presentation on Processing1 and share some of your code on screen. It turns out there is an extremely simple workaround to copy the text with its original formatting right into your slide.

Let's look at the steps.

First, select the fragment of Processing code you want on your slide, right-click, and choose "Copy as HTML." (The code should now be stored in your clipboard.)

Next, open a code editor (such as Atom or Sublime Text) and paste the copied HTML. If you see something like the following image, go ahead and save it as an HTML file. (Something like code.html will work as a name.)

HTML code on Atom for macOS.

Now just drag your HTML file to Safari and the code should appear properly formatted in the browser. The last step is to select, copy, and paste the code from Safari into your Keynote slide, where it should keep its formatting. You can edit the font size or any other parameters in Keynote, but it's nice to get the colors and the font displayed directly as in Processing.

(So far, I've tested this workflow with Processing 3.0 and it works.)


  1. Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping. (Processing.org) ↩︎

AUGUST 13, 2016

After learning how to implement vinkla/instagram and larabros/elogram in Laravel, I discovered that they are only needed if you want to interact with the private API of Instagram to authenticate a user and perform actions with their account (e.g., liking images, posting content, commenting, etc.).

If you — like me — just want to obtain the URL of Instagram media, such as an image, you can do the following with public access (no authentication or access_token needed). Compared to services like Flickr, though, Instagram offers a pretty low resolution of just 1080 by 1080 pixels.

So, let's get to it.

Instagram provides three different sizes of each image you upload: thumbnail, medium, and large. If you wanted to access those sizes for the following picture, https://www.instagram.com/p/BI0BQkfh0ed, you would do as follows.

// Thumbnail
https://www.instagram.com/p/BI0BQkfh0ed/media/?size=t
// Medium
https://www.instagram.com/p/BI0BQkfh0ed/media/?size=m
// Large
https://www.instagram.com/p/BI0BQkfh0ed/media/?size=l

The last letters—t, m, and l—represent the size of the image. These links redirect to the URL of the Instagram media at the size you specify, and can be used as the src attribute of an HTML img tag, or as a way to download Instagram pictures.
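If you are building those links in PHP, a tiny helper is enough, since only the shortcode and the size letter change. A sketch (the helper name is made up):

// Hypothetical helper: build the public media URL for a post's shortcode.
function instagramMediaUrl(string $shortcode, string $size = 'l'): string
{
    return "https://www.instagram.com/p/{$shortcode}/media/?size={$size}";
}

echo instagramMediaUrl('BI0BQkfh0ed', 'm'); // medium-sized image URL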

JUNE 15, 2015

Laravel is a powerful PHP framework. Like many other PHP systems, it benefits from the use of Composer to manage its dependencies, making the use of open-source libraries extremely simple.

What follows is a list of the packages (or dependencies) I am using in most of my current projects—let me explain why.

dimsav/laravel-translatable

A package I discovered a few weeks ago. It lets your Laravel app store translations of your SQL tables in various languages, with a really flexible structure. You can, for instance, have the articles in your blog written in English by default and only translate the ones you want into certain languages.

Then, you can show those translations to users who have selected that locale in their browser, but fall back to the default language if an article is not available in theirs.
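As a rough sketch of what a translatable model looks like (the Article model and its attributes are made up, and the translation model and table still need to be set up as the package's README describes):

use Dimsav\Translatable\Translatable;
use Illuminate\Database\Eloquent\Model;

class Article extends Model
{
    use Translatable;

    // Attributes that live in the article translations table.
    public $translatedAttributes = ['title', 'body'];
}

// Later on, read a specific translation.
$article = Article::first();
echo $article->translate('es')->title;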

Check it on GitHub.

jenssegers/laravel-date

A package based on Carbon. It makes working with dates ridiculously simple, with support for many languages.

Some of the features I use the most are: parsing database dates into human-readable ones (2015-06-14 could be translated to Sunday 14, June 2015); expressing how long ago a piece of content was created (posted 2 minutes ago, for instance); and calculating dates in the past or in the future by adding or subtracting days, weeks, months (or whatever unit) to a date object.

The possibilities are unlimited, and this library makes working with dates easier than ever.
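A quick sketch of those three features, using the package's Date class (which extends Carbon, so the methods below are Carbon's; the exact localized output may vary):

use Jenssegers\Date\Date;

echo Date::parse('2015-06-14')->format('l j, F Y'); // Sunday 14, June 2015
echo Date::now()->subMinutes(2)->diffForHumans();   // 2 minutes ago
echo Date::now()->addWeeks(3)->subDays(2);          // a date in the future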

Check it on GitHub.

rtconner/laravel-tagging

With this package, you can add the Taggable trait to any of your models and start tagging them. Then, you can use the query builder with the package's own methods to filter your content by tag.

I have been using it for articles and projects, to organize content and allow users to navigate by article categories.
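A minimal sketch of what that looks like (the Article model is hypothetical, and the trait's namespace and query scopes may differ slightly between package versions):

use Conner\Tagging\Taggable;
use Illuminate\Database\Eloquent\Model;

class Article extends Model
{
    use Taggable;
}

// Tag a model, then query models by tag.
$article = Article::first();
$article->tag(['laravel', 'php']);
$articles = Article::withAnyTag(['laravel'])->get();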

Check it on GitHub.

panique/laravel-sass

If you are designing with SCSS, you need a parser to automate the generation of your CSS files. This package does the job pretty well for me.

It allows you to run inline PHP functions specifying which SCSS folder to parse and where to save the CSS. It also has a built-in function to minify your CSS files, compressing them considerably, so you don't have to worry about it.
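In practice, that boils down to a single call. A sketch, assuming the package exposes the SassCompiler::run(input, output) helper of the underlying php-sass library:

// Compile every .scss file in scss/ and write the resulting CSS to css/.
SassCompiler::run('scss/', 'css/');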

For development purposes, I tend to check a GET variable in the App::before() filter function (located in app/filters.php) to force the generation of new CSS files. Visiting /home/?scss=1, for instance, would regenerate all my CSS files.

Check it on GitHub.

vtalbot/markdown

Based on Michelf’s PHP parser, this package implements methods to parse Markdown text from strings or files directly.

One example would be calling Markdown::string($string) in your code to parse $string from Markdown to HTML.
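For instance (output shown approximately):

$html = Markdown::string('# Hello, *world*');
// $html now contains roughly "<h1>Hello, <em>world</em></h1>"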

Check it on GitHub.

If you know of other PHP packages I should check out, please drop me a tweet! Thanks for reading.

MARCH 16, 2015

Following up on the Scripting In Rhino Python series of articles, here is a useful snippet of code that automates switching units in a Rhino document.

This script switches the units of the current document between meters and millimeters. The important function is rs.UnitSystem(), which returns the current unit system — or sets it if we pass parameters to the function. (In Rhino's unit codes, 2 stands for millimeters and 4 for meters.)

import rhinoscriptsyntax as rs

if rs.UnitSystem() == 2:
    # Current unit is mm, Switch to m
    rs.UnitSystem(4, False, True)
else:
    # Current unit is m, switch to mm
    rs.UnitSystem(2, False, True)
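To double-check the result, you can print the unit system by name; a small addition, assuming rhinoscriptsyntax's UnitSystemName helper:

import rhinoscriptsyntax as rs

# Print the document's current unit system, e.g. "Millimeters" or "Meters".
print(rs.UnitSystemName())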

What's Next

This article is part of a series of posts about efficient architectural methods, workflows and tools, titled Getting Architecture Done.

If you want to be notified when other articles of the same series are posted, go ahead and subscribe here.

MARCH 6, 2015

Without any doubt, being able to code is an advantage over not knowing how to. At the very least, it brings new opportunities to do things [that are usually a pain] in an easier way.

What follows is a list of three ways in which coding will probably make your life easier, providing automation, precision, and customization.

Code Is Automation

Many tasks of daily work are repetitive, and computers are the best at performing repetitive tasks. Small scripts and pieces of code can save tiny bits of your time or, frequently, a lot — mainly when simple but repetitive tasks can be automated in simple ways.

Code Is Precision

Apart from doing things for us, code is based on algorithms, which means a program knows the logic behind how to solve a specific problem really well, and with great accuracy. The human brain gets tired and tends to think slowly after continuous hours of work. On the other hand, computers may heat up, but they won't miscalculate an operation if an algorithm was properly coded, which means machines won't make mistakes (well, we all know that big, complex systems frequently bring a lot of bugs with them, but that is not the case for small, easy tasks).

Code Is Customization

Every day, new apps and software are released for our computers and smartphones, providing new or improved functionality. That is great, but we all depend on what others decide can be useful for us. If you can code, this brings a great opportunity: you can program an app nobody else thought of to solve a personal need. And, maybe, someone else will be willing to pay for it, as others with the same need may find it useful.

What’s Next?

The first step to learning how to code is to start. Read how to write a simple line of code, see what it does, and learn from there. Give this a try.
