JULY 24, 2019

This example uses regular expressions (RegExp) to replace multiple occurrences of one or more substrings in a string.

const string = `The car is red. The car is black.`;

const replacedString = string.replace(/car|is/g, "·····");

console.log(replacedString);
// logs The ····· ····· red. The ····· ····· black.
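If the words to replace are only known at runtime, you can build the pattern with the RegExp constructor. Here is a small sketch; the escapeForRegExp helper is mine and simply escapes special regex characters.

const escapeForRegExp = (text: string) => text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

const words = ["car", "is"];
const pattern = new RegExp(words.map(escapeForRegExp).join("|"), "g");

console.log("The car is red. The car is black.".replace(pattern, "·····"));
// logs The ····· ····· red. The ····· ····· black.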

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

JULY 24, 2019

Here is an example of nested Promise.all() calls. We are using the Fetch API to load a set of paths or URLs, then requesting the arrayBuffer() of each of the responses we get back. This would be trivial if we handled each response on its own, but we want to do something with the output buffers once they are all available, and not one by one.

Specifically, this code tries to (1) fetch an array of images; (2) get their array buffers; and (3) obtain their base64 representations. In essence, it maps an array of images (given their paths or URLs) to their corresponding base64 strings.


While this technique works in both TypeScript and JavaScript, the code is only shown in TypeScript.

Approach 1: Verbose

const images = [/* Array of image URLs (or local path if running in Electron) */]

Promise.all(images.map((url) => fetch(url))).then((responses: any) => {

    return Promise.all(responses.map((res: Response) => res.arrayBuffer())).then((buffers) => {
        return buffers;
    });

}).then((buffers: any) => {

    return Promise.all(buffers.map((buffer: ArrayBuffer) => {
        return this.arrayBufferToBase64(buffer);
    }));

}).then((imagesAsBase64: any) => {

    // Do something with the base64 strings
    window.console.log(imagesAsBase64);

});

Approach 2: Simplified

const layerImages = [/* Array of image URLs (or local path if running in Electron) */]

Promise.all(layerImages.map((url) => fetch(url))).then((responses: any) => {

    return Promise.all(responses.map((res: Response) => res.arrayBuffer())).then((buffers) => {
        return buffers.map((buffer) => this.arrayBufferToBase64(buffer));
    });

}).then((imagesAsBase64: any) => {

    // Do something with the base64 strings
    window.console.log(imagesAsBase64);

});

ArrayBuffer to base64

// source: stackoverflow.com
private arrayBufferToBase64(buffer: any) {
    let binary = "";
    const bytes = [].slice.call(new Uint8Array(buffer));
    bytes.forEach((b: any) => binary += String.fromCharCode(b));
    // Inside of a web tab
    return window.btoa(binary);
}
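For reference, here is a sketch of the same mapping written with async/await instead of nested then calls. It assumes a standalone arrayBufferToBase64 function like the method above is in scope, and the imagesToBase64 name is just for illustration.

async function imagesToBase64(urls: string[]): Promise<string[]> {
    const responses = await Promise.all(urls.map((url) => fetch(url)));
    const buffers = await Promise.all(responses.map((res) => res.arrayBuffer()));
    return buffers.map((buffer) => arrayBufferToBase64(buffer));
}

// imagesToBase64(images).then((imagesAsBase64) => window.console.log(imagesAsBase64));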

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

JUNE 10, 2019

#!/bin/bash

Load the variables defined in a .env file from a Makefile and export them to the commands the Makefile runs.

#!make
include .env
export $(shell sed 's/=.*//' .env)

MAY 17, 2019

Yay! Visual Studio Code just announced a preview release of an "extension [that] let[s] you work with [Visual Studio] Code over SSH on a remote machine or VM, in Windows Subsystem for Linux (WSL), or inside a Docker container."

Go ahead and install the Remote Development extension. "An extension pack that lets you open any folder in a container, on a remote machine, or in WSL and take advantage of VS Code's full feature set."

MAY 15, 2019

A while ago, we needed to grab the app window variable (exposed by CefSharp) and we were extending the Window interface to do that. There seems to be a better way to get variables that are defined in the window environment.

I learned this from the article An advanced guide on how to setup a React and PHP.

Say you are defining a variable or object you want to read from React (like in CefSharp, or directly in the HTML, as in the snippet below).

// inside of your app entry HTML file's header
<script>
var myApp = {
  user: "Name",
  logged: true
}
</script>

You can do a declare module 'myApp' in index.d.ts, then add the myApp variable as an external library in Webpack's config file.

externals: {
  myApp: `myApp`,
},
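For the index.d.ts side, a minimal ambient-module declaration could look like the following sketch. The MyApp shape here is an assumption based on the HTML snippet above; adjust it to match your own object.

declare module "myApp" {
    interface MyApp {
        user: string;
        logged: boolean;
    }
    const myApp: MyApp;
    export default myApp;
}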

Then you can import it as if it were a module in TypeScript (or JavaScript) files with React.

import myApp from 'myApp';

And you can even use TypeScript's destructuring to get internal properties directly.

const { user, logged } = myApp;

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

FEBRUARY 25, 2019

While trying to export my React app's Redux store from index.tsx to use it somewhere outside of the React application, I was getting an Invariant Violation: Target container is not a DOM element error while running Jest tests (with Enzyme and Webpack) on the App component (App.tsx).

I found a solution to this error for my use case, which was using the same Redux store React uses, but from outside of React.

The error

The initial code that didn't work when testing React looked like this.

// index.tsx

import * as React from "react";
import { render } from "react-dom";
import { Provider } from "react-redux";
import { applyMiddleware, compose, createStore } from "redux";
import App from "./components/App";
import { rootReducer } from "./store/reducers";
import { initialState } from "./store/state";

const middlewares = [];

export const store = createStore(
    rootReducer,
    initialState,
    compose(applyMiddleware(...middlewares)),
);

render(
    <Provider store={store}>
        <App />
    </Provider>,
    document.getElementById("root"),
);

The solution

Separate the Redux store logic into a new file named store.ts, then create a default export (to be used by index.tsx, i.e., the React application) and a non-default export with export const store (to be used from non-React classes), as follows.

// store.ts

import { applyMiddleware, compose, createStore } from "redux";
import logger from "redux-logger";
import { rootReducer } from "./store/reducers";
import { initialState } from "./store/state";

const middlewares = [];

export const store = createStore(
    rootReducer,
    initialState,
    compose(applyMiddleware(...middlewares)),
);

export default store;

// updated index.tsx

import * as React from "react";
import { render } from "react-dom";
import { Provider } from "react-redux";
import App from "./components/App";
import store from "./store";

render(
    <Provider store={store}>
        <App />
    </Provider>,
    document.getElementById("root"),
);

Using the Redux store in non-React classes

// MyClass.ts

import { store } from "./store"; // store.ts

export default class MyClass {
  handleClick() {
    store.dispatch({ ...new SomeAction() }); // SomeAction is an example action class defined elsewhere
  }
}

The default export

A small note before you go. Here is how to use the default and the non-default exports; a small sketch follows the list.

  • export default store; is used with import store from "./store";
  • export const store = ... is used with import { store } from "./store";
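Both imports can even coexist in the same file and point to the same store instance.

// a minimal sketch
import store from "./store"; // default export
import { store as namedStore } from "./store"; // named export

console.log(store === namedStore); // true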

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

DECEMBER 18, 2018

Here are my highlights of Deep dive into Electron’s main and renderer processes by Cameron Nokes.

[Each of these processes is] an operating-system-level process, or as Wikipedia puts it, "an instance of a computer program that is being executed."

[…] Each of these processes run concurrently to each other. […] [M]emory and resources are isolated from each other. […] The two processes don't share memory or state.

Why multiple processes?

Chromium runs each tab in a separate process so that if one tab runs into a fatal error, it doesn't bring down the entire application. […] "Chromium is built like an operating system, using multiple OS processes to isolate web sites from each other and from the browser itself."

Main process

[I]s responsible for creating and managing BrowserWindow instances and various application events. It can also register global shortcuts, create native menus and dialogs, respond to auto-update events, and more. Your app's entry point will point to a JavaScript file that will be executed in the main process. A subset of Electron APIs are available in the main process, as well as all Node.js modules. The docs state: “The basic rule is: if a module is GUI or low-level system related, then it should be only available in the main process.” (Note that GUI here means native GUI, not HTML-based UI rendered by Chromium). This is to avoid potential memory leak problems.

Renderer process

The renderer process is responsible for running the user interface of your app, or in other words, a web page which is an instance of webContents. All DOM APIs, Node.js APIs, and a subset of Electron APIs are available in the renderer. […] [O]ne or more webContents can live in a single window […] because a single window can host multiple webviews and each webview is its own webContents instance and renderer process.


See this Venn diagram of Electron (provided by the source).


How do I communicate between processes?

Electron uses interprocess communication (IPC) to communicate between processes—same as Chromium. IPC is sort of like using postMessage between a web page and an iframe or webWorker […] you send a message with a channel name and some arbitrary information. IPC can work between renderers and the main process in both directions. IPC is asynchronous by default but also has synchronous APIs (like fs in Node.js).
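As a rough sketch of what a round trip looks like (the "ping" and "pong" channel names are arbitrary):

// main process
import { ipcMain } from "electron";

ipcMain.on("ping", (event, message) => {
    console.log(message); // "hello from the renderer"
    event.sender.send("pong", "hello from the main process");
});

// renderer process
import { ipcRenderer } from "electron";

ipcRenderer.send("ping", "hello from the renderer");
ipcRenderer.on("pong", (event, message) => console.log(message));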

Electron also gives you the remote module, which allows you to, for example, use a main process module like Menu as if it were available in the renderer. No manual IPC calls [are] necessary, but what's really going on behind the scenes is that you are issuing commands to the main process via synchronous IPC calls. (These can be debugged with devtron.)

Can I make something work in both the main and renderer?

Yes, because main process APIs can be accessed through remote, you can do something like this:

const electron = require('electron');
const Menu = electron.Menu || electron.remote.Menu;

// now you can use it seamlessly in either main or renderer

console.log(Menu);

(See the full thing.)

DECEMBER 7, 2018

Here's a note on how to display dialogs, alerts, and notifications on macOS with AppleScript, useful to automate day-to-day tasks you do with your machine, or even create complex programs.

(To the uninitiated, you would run this code by opening the AppleScript Editor (on macOS), pasting the code there, and hitting run.)

Dialog and Alert

display alert "This is an alert" buttons {"No", "Yes"}
if button returned of result = "No" then
    display alert "No was clicked"
else
    if button returned of result = "Yes" then
        display alert "Yes was clicked"
    end if
end if

System notification

display notification "Have a simple day!"

NOVEMBER 20, 2018

After cloning a repository, you can have git not track changes you make to one (or multiple) files.

Tell Git to assume a file is unchanged

git update-index --assume-unchanged file

Tell Git not to assume a file is unchanged anymore

After this command is run, Git resumes tracking changes to the file. The file might have changes that Git will want to commit.

git update-index --no-assume-unchanged file

Roll back the changes you made while the file was --assume-unchanged

In case you made changes while --assume-unchanged was on and don't want to keep them, roll the file back to the version in the repository before you pull or push changes.

git checkout -- file

NOVEMBER 14, 2018

for...of

const numbers = [1, 3, 100, 24];
for (const item of numbers) {
  console.log(item); // 1, 3, 100, 24
}
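If you also need the index of each item, Array.prototype.entries() pairs every value with its position. A small sketch:

const numbers = [1, 3, 100, 24];
for (const [index, item] of numbers.entries()) {
  console.log(index, item); // 0 1, 1 3, 2 100, 3 24
}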

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

NOVEMBER 9, 2018

GitHub just released GitHub Actions, to "automate your workflow from idea to production." Their slogan:

Focus on what matters: code

Here are some comments by Sarah Drasner on CSS-Tricks:

Previously, there were only few options here that could help with that. You could piece together other services, set them up, and integrate them with GitHub. You could also write post-commit hooks, which also help.

[...]

Actions are small bits of code that can be run off of various GitHub events, the most common of which is pushing to master. But it's not necessarily limited to that. They’re all directly integrated with GitHub, meaning you no longer need a middleware service or have to write a solution yourself. And they already have many options for you to choose from. For example, you can publish straight to npm and deploy to a variety of cloud services, (Azure, AWS, Google Cloud, Zeit... you name it) just to name a couple.

But actions are more than deploy and publish. That’s what’s so cool about them. They’re containers all the way down, so you could quite literally do pretty much anything — the possibilities are endless! You could use them to minify and concatenate CSS and JavaScript, send you information when people create issues in your repo, and more... the sky's the limit.

You also don’t need to configure/create the containers yourself, either. Actions let you point to someone else’s repo, an existing Dockerfile, or a path, and the action will behave accordingly. This is a whole new can of worms for open source possibilities, and ecosystems.

Curious about how this all works? Take a look at the CSS-Tricks tutorials.

Visit GitHub Actions.

NOVEMBER 3, 2018

Here are some notes I took while reading GitHub's An Introduction to innersource white paper.

Organizations worldwide are incorporating open source methodologies into the way they build and ship their own software. […]

Many companies use the word “innersource” to describe how their engineering teams work together on code. Innersource is a development methodology where engineers build proprietary software using best practices.

[…]

[I]nnersource code helps your team discover, customize, and reuse existing internal projects. They can also establish and build on a shared set of documented processes to optimize the way your company deploys and uses software. This can lead to lower cost, greater flexibility, and an end to vendor lock-in.

[…]

Within an enterprise, individual developers can pursue their interests, share ideas on a level playing field, and more easily learn from their peers. However, innersource also requires a cultural shift. Your team’s culture will need to encourage knowledge sharing and welcome contributions from across your organization. […] For innersource projects, distributing control across a smaller group of participants frequently makes approvals and reviews more effective. Creating a small, cross-functional team of decision makers can also help teams stick to quality standards and gain executive support.

Adopting innersource practices is like starting an open source community within your organization. As with open source, transparent collaboration mobilizes a community’s collective knowledge and skills to create better software. An innersource community, in contrast, contains the knowledge, skills, and abilities of people and tools within a single enterprise.

Why do companies adopt it?

As businesses evolve and differentiate their products and services with software and data—or recognize software and data is their product or service—they quickly realize that traditional development methods and tooling don’t quite work. The slow, systematic practice of gathering requirements, holding meetings, and developing in silos is not in step with the pace of technology today—or even the pace of customer demands.

Innersource helps teams build software faster and work better together—resulting in higher-quality development and better documentation. It also can help companies become more efficient by:

  • Making it easy to find and reuse code on a broad scale, avoiding wasted resources and duplication
  • Driving rapid development, regardless of company size
  • Reducing silos and simplifying collaboration throughout the entire organization—inside and between teams and functions, as well as across teams and business lines
  • Increasing clarity between engineers and management, as well as anyone else who’s interested
  • Creating a culture of openness, a precursor to open source participation
  • Reinforcing the pride, growth, and job satisfaction felt by team members who help wherever there is a need

OCTOBER 31, 2018

React Google Charts is "A thin, typed, React wrapper over Google Charts Visualization and Charts API." View the source on GitHub.

OCTOBER 17, 2018

This year, the ACADIA conference is taking place at UNAM's Facultad de Arquitectura, Mexico City. As part of the Talk to a Wall workshop, Cristobal Valenzuela (@c_valenzuelab) talked about his work on RunwayML, ml5js, and a lot of what's going on at the moment in the field of artificial intelligence, machine learning, and deep learning.

Along with his definition of artificial intelligence, "[the] simulation of intelligent behavior in computers," he shared the following quotes from some of the most relevant artificial intelligence researchers of the past few years.

Models for Thinking, Perception, Action.

—Patrick H. Winston, MIT

Many things can be AI, including simple programming. AI is the automation of thought.

—François Chollet, researcher and author of Keras

A field of study that gives computers the ability to learn without being explicitly programmed.

—Arthur Samuel, MIT. Samuel Checkers, 1957

If you're interested in artificial intelligence and machine learning, you should definitely follow @c_valenzuelab, @ml5js, and @runwayml.

SEPTEMBER 26, 2018


Lobe is a web-based visual programming environment to create and deploy machine learning models, founded in 2015 by Mike Matas, Adam Menges, and Markus Beissinger "to make deep learning accessible to everyone," and recently acquired by Microsoft.

Lobe is an easy-to-use visual tool that lets you build custom deep learning models, quickly train them, and ship them directly in your app without writing code.

I saw a live demo at SmartGeometry earlier this year and I can't wait to play with it once it's deployed on Microsoft's servers.

You can see a few examples at Lobe.ai. (They're looking for people to join their team.)


Watch this video to see examples of things people have built using Lobe and how to build your own custom deep learning models.

SEPTEMBER 25, 2018

From PNG to JPG (with ImageMagick's mogrify).

mogrify -format jpg *.png

From JPG to PNG.

mogrify -format png *.jpg

AUGUST 28, 2018

In TypeScript, as in other languages, Array.map allows you to apply a function to each of the items in a list or array. You can pass an existing function which will take each of the items as its input parameter (say, the existing Math.sqrt function, or one that you define).

let list = [0, 1, 2, 3]; // [0, 1, 2, 3]
list.map(Math.sqrt); // [ 0, 1, 1.414.., 1.732.. ]

Or you can define a lambda (arrow) function on the fly.

let list = [0, 1, 2, 3]; // [0, 1, 2, 3]
list.map((value, key, all) => {
  return value * 2;
}); // [ 0, 2, 4, 6 ]
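Note that map returns a new array and leaves the original untouched, so you can also map to a different type. A small sketch:

let values = [0, 1, 2, 3];
let labels = values.map((value) => `#${value}`); // [ "#0", "#1", "#2", "#3" ]
console.log(values); // [0, 1, 2, 3] (unchanged)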

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

AUGUST 19, 2018

There is a nifty way to specify the way in which you want each of the pages (or Laravel routes) of your site to be indexed by search engines. In my case, I looked at the Robots meta tag and X-Robots-Tag HTTP header specifications to learn more about what was possible.

In short, you might tell Google a specific route or page has "no restrictions for indexing or serving" by setting the X-Robots-Tag HTTP header to all or, on the contrary, tell it to stop indexing (or saving cached versions of a page) with the noindex value.

In Laravel, the guys at Spatie made it really easy. Just install their spatie/laravel-robots-middleware composer package on your Laravel app with:

composer require spatie/laravel-robots-middleware

Let's see a few examples of how to use this.

Allow every single page to be indexed and served

Create a new middleware in your application.

// app/Http/Middleware/MyRobotsMiddleware.php

<?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Spatie\RobotsMiddleware\RobotsMiddleware;

class MyRobotsMiddleware extends RobotsMiddleware
{
    /**
     * @return string|bool
     */
    protected function shouldIndex(Request $request)
    {
        return 'all';
    }
}

And then register your new middleware in the middleware stack.

// app/Http/Kernel.php

class Kernel extends HttpKernel
{
    protected $middleware = [
        // ...
        \App\Http\Middleware\MyRobotsMiddleware::class,
    ];

    // ...
}

Forbid every single page from being indexed, cached, and served

// app/Http/Middleware/BlockAllRobotsMiddleware.php

<?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Spatie\RobotsMiddleware\RobotsMiddleware;

class BlockAllRobotsMiddleware extends RobotsMiddleware
{
    /**
     * @return string|bool
     */
    protected function shouldIndex(Request $request)
    {
        return 'noindex';
    }
}

Conditional robots middleware

Probably the most interesting application of this middleware is to embed more intelligent logic to avoid indexing specific pages while letting Google (and other search engines) crawl the pages you do want to expose.

We could send a noindex header for our admin pages only, for instance.

// app/Http/Middleware/SelectiveRobotsMiddleware.php

<?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Spatie\RobotsMiddleware\RobotsMiddleware;

class SelectiveRobotsMiddleware extends RobotsMiddleware
{
    protected function shouldIndex(Request $request) : string
    {
        if ($request->segment(1) === 'admin') {
            return 'noindex';
        }
        return 'all';
    }
}

Remember that you need to add all of your new middlewares to the app/Http/Kernel.php file in order for them to be called before each request. This method can be handy to block search indexing with noindex or to customize the way search engines are allowed to process your pages. Here are other directives you can use in the X-Robots-Tag HTTP header and what they mean.

  • all - There are no restrictions for indexing or serving. Note: this directive is the default value and has no effect if explicitly listed.
  • noindex - Do not show this page in search results and do not show a "Cached" link in search results.
  • nofollow - Do not follow the links on this page
  • none - Equivalent to noindex, nofollow
  • noarchive - Do not show a "Cached" link in search results.
  • nosnippet - Do not show a text snippet or video preview in the search results for this page. A static thumbnail (if available) will still be visible.
  • notranslate - Do not offer translation of this page in search results.
  • noimageindex - Do not index images on this page.
  • unavailable_after: [RFC-850 date/time] - Do not show this page in search results after the specified date/time. The date/time must be specified in the RFC 850 format.

Thanks!

I hope you found this useful. Feel free to ping me at @nonoesp or join the mailing list. Here are some other Laravel posts and code-related posts.

JULY 31, 2018

ViveTrack is a DynamoBIM package that allows real-time reading of HTC Vive spatial tracking data, developed by Jose Luis García del Castillo y López (@garciadelcast) at the Generative Design Group at Autodesk.

JULY 15, 2018

Simplified Facebook Login Screen

Facebook's homepage is mainly designed to get you to sign up and create a new account. But you only have to do that once. Every time you access Facebook afterwards, you probably just want to log in. With the following steps, you'll be able to hide everything but the login form.


This workflow overrides the styling of some website elements to hide them, and you just need to paste the following code inside the Stylebot Chrome extension when you have Facebook.com open in your browser. It will just hide the HTML elements that clutter your screen and leave a clean interface for you to sign in.

#pagelet_video_home_suggested_for_you_rhc,
#createNav,
#appsNav,
#pageFooter,
.fb_logo,
.pvl,
.login_form_label_field {
    display: none;
}

How to install Stylebot (and apply this style to Facebook.com)

  • Open this page on Google Chrome.
  • Click on Add to Chrome.
  • Go to Facebook.com
  • Open Stylebot by clicking the CSS icon you just installed, in your browser's top-right panel.
  • Then select Open Stylebot...
  • Paste the code snippet in the text editor.
  • Press Save.

Beware that, as Facebook updates its CSS class names (that is, the way they name the code that styles their website), this snippet will need to be updated to accommodate the user interface changes.

APRIL 5, 2018

To import JSON into your TypeScript code, you need to add the following code to a typings file (a file with a name like *.d.ts, say, json.d.ts—but it does not necessarily need to say json)1.

// This will allow you to load `.json` files from disk

declare module "*.json"
{ const value: any;
  export default value;
}

// This will allow you to load JSON from remote URL responses

declare module "json!*"
{ const value: any;
  export default value;
}

After doing this, you can do the following in TypeScript.

import * as graph from './data/graph.json';
import data from "json!http://foo.com/data_returns_json_response/";

You can then use graph and data as JSON objects in your TypeScript code.


  1. I used this code to load a Dynamo JSON graph into TypeScript — just change the .dyn extension to .json and it will work with this code. ↩︎

Before you go

If you found this useful, you might want to join my newsletter; or take a look at other posts about code, TypeScript, and React.

MARCH 16, 2018

Hey! Jose Luis and I will be running a workshop called Mind Ex Machina at the forthcoming SmartGeometry conference in Toronto (May 7–12, 2018). We will be exploring the creative potential of human-robot interfaces with machine intelligence. You should come!

What is SmartGeometry?

SmartGeometry is a bi-annual workshop and conference which "[gathers] the global community of innovators and pioneers in the fields of architecture, design, and engineering."

Each edition, the event takes place at a different location around the world (previous locations include Gothenburg, Hong Kong, London, and Barcelona) and features a challenge to be tackled by each of the ten "clusters" that make up the conference's workshops.

This year's challenge—Machine Minds—will take place at the University of Toronto, Canada, May 7–12, 2018. The four-day workshop, May 7–10, will be followed by a two-day conference, May 11–12.

What are we doing?

As mentioned before, this year, Jose Luis García del Castillo and I are leading the Mind Ex Machina cluster, which will explore the possibilities of creative human-robot interactions with the use of machine intelligence. Here is a more detailed description of our cluster's goals.

Robot programming interfaces are frequently developed to maximise performance, precision and efficiency in manufacturing environments, using procedural deterministic paradigms. While this is ideal for engineering tasks, it may become constraining in design contexts where flexibility, adaptability and a certain degree of indeterminacy are desired, in order to favour the exploratory nature of creative inquiry. This workshop will explore the possibilities of goal-oriented, non-deterministic real-time robot programming through Machine Intelligence (machine learning and artificial intelligence) in the context of collaborative design tasks. We argue that these new paradigms can be particularly fit for robot programming in creative contexts, and can help designers overcome the high entry barrier that robot programming typically features. Participants will be encouraged to explore this possibility through the conception and implementation of machine intelligence-aided interfaces for human-robot collaborative tasks.

Why should you come?

Machine intelligence is becoming ubiquitous, and slick, complex mathematical models are being developed (and open sourced) to provide our machines with pieces of intelligence to perform a wide variety of tasks (from object or face or speech recognition to image style transfer, drawing, or even music composition).

It is our responsibility as architects, designers, and engineers, to envision how we will use these technologies in our own field, to explore new paradigms of interaction and discover their role in our creative processes.


Cluster applications for SmartGeometry 2018 are still open. (There are only a few spots left!) Take a look at all different clusters and sign up here. You can also keep track of our cluster's work on our private mailing list.

SEPTEMBER 16, 2017

To make sure your Laravel application doesn't break when you are applying changes to your database, it's a good practice to check whether a table exists before making any queries against it.

\Schema::hasTable('users');

MAY 20, 2017

For the last four months, I've been working on my master's thesis—Suggestive Drawing Among Human and Artificial Intelligences—at the Harvard Graduate School of Design. You can read a brief summary below.

The publication intends to explain what Suggestive Drawing is all about, with a language that, hopefully, can be understood by artists, designers, and other professionals with no coding skills.

You can read the interactive web publication or download it as a PDF.

For the tech-savvy, and for those who would like to dive in and learn more about how the working prototype of the project was developed, I'm preparing a supplemental Technical Report that will be available online.


A Brief Summary

We use sketching to represent the world. Design software has made tedious drawing tasks trivial, but we can't yet consider machines to be participants of how we interpret the world as they cannot perceive it. In the last few years, artificial intelligence has experienced a boom, and machine learning is becoming ubiquitous. This presents an opportunity to incorporate machines as participants in the creative process.

In order to explore this, I created an application—a suggestive drawing environment—where humans can work in synergy with bots1 that have a certain character, with non-deterministic and semi-autonomous behaviors. The project explores the user experience of drawing with machines, escapes the point-and-click paradigm with a continuous flow of interaction, and enables a new branch of creative mediation with bots that can develop their own aesthetics. A new form of collective creativity in which human and non-human participation results in synergetic pieces that express each participant's character.

In this realm, the curation of image data sets for training an artificially intelligent bot becomes part of the design process. Artists and designers can fine tune the behavior of an algorithm by feeding it with images, programming the machine by example without writing a single line of code.

Drawing Among Humans and Machines

The application incorporates behavior—humans and bots—but not toolbars, and memory—as it stores and provides context for what has been drawn—but no explicit layer structure. Actions are grouped by spatial and temporal proximity that dynamically adjusts in order not to interrupt the flow of interaction. The system allows users to access from different devices, and also lets bots see what we are drawing in order to participate in the process. In contrast to interfaces of clicks and commands, this application features a continuous flow of interaction with no toolbars but bots with behavior. What you can see in the following diagram is a simple drawing suggestion: I draw a flower and a bot suggests a texture to fill it in. In this interface, you can select multiple human or artificial intelligences with different capabilities and delegate tasks to them.

User Interface and Sample Drawing Suggestion

Suggestive Drawing Bots

I developed three drawing bots—texturer, sketcher, and continuator—that suggest texture, hand-sketched detail, or ways to continue your drawings, respectively. Classifier recognizes what you are drawing, colorizer adds color, and rationalizer rationalizes shapes and geometry. Learner sorts drawings in order to use existing drawings for training new bots according to a desired drawing character, allowing the artist to transfer a particular aesthetic to a given bot. In training a bot, one of the biggest challenges is the need to either find or generate an image data set from which bots can learn.

Onward

This project presents a way for artists and designers to use complex artificial intelligence models and interact with them in familiar mediums. The development of new models—and the exploration of their potential uses—is a road that lies ahead. As designers and artists, I believe it is our responsibility to envision and explore the interactions that will make machine intelligence a useful companion in our creative processes.


Thanks so much for reading.


  1. According to the English Oxford Dictionary, a bot is an autonomous program on a network (especially the Internet) which can interact with systems or users. ↩︎

MAY 4, 2017

I'm one week away from my master’s thesis presentation—Suggestive Drawing Among Human and Artificial Intelligences—which will take place on Wednesday May 10 at 11:20 am at the Harvard Graduate School of Design, room 123.

Suggestive Drawing Countdown

This teaser page features a countdown with an illustration of suggestive drawing bots—artificially-intelligent bots that help you draw.

Take a look and subscribe if you want to be notified when the project is released. (You can also just check nono.ma/ai in 7 to 10 days.)

MARCH 2, 2017

When using Laravel, it is common to sort Eloquent models obtained with the query builder by calling ->orderBy('created_at', 'DESC'), for instance. But this is not always possible when arranging an Eloquent Collection (Illuminate\Database\Eloquent\Collection). To do this, we need to pass a sorting closure to the ->sortBy() method. (Say, for example, that the models in our collection have an order property.) In that case, we could just call the following:

$items = $items->sortBy(function($item) {
  return -$item->order;
});

NOVEMBER 21, 2016

You might, as I did, find yourself wanting to give a presentation on Processing1 and share some of your code on screen. I found that there is an extremely simple way to copy the text with its original formatting right into your slide.

Let's look at the steps.

First, select the fragment of Processing code you want on your slide, right-click the selection, and choose "Copy as HTML." (The code should now be stored in the clipboard.)

Next, open a code editor (such as Atom or Sublime Text) and paste the copied HTML. If you see something like the following image, go ahead and save it as an HTML file. (Something like code.html will work as a name.)

HTML code on Atom for macOS.

Now just drag your HTML file to Safari and the code should appear properly formatted in the browser. The last step is to select, copy, and paste the code from Safari into your Keynote slide. You should now have formatted Processing code into your Keynote slide. You can edit the font size or any other parameters in Keynote, but it's nice to get the colors and the font displayed directly as in Processing.

(So far, I've tested this workflow with Processing 3.0 and it works.)


  1. Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping. (Processing.org) ↩︎

AUGUST 13, 2016

After learning how to implement vinkla/instagram and larabros/elogram in Laravel, I discovered that they are only needed if you want to interact with Instagram's private API to authenticate a user and perform actions with their account (e.g., liking images, posting content, commenting, etc.).

If you — like me — just want to obtain the URL of Instagram media, such as an image, you can do the following with public access (no authentication or access_token needed), which, compared to services like Flickr, offers a pretty low resolution of just 1080 by 1080 pixels.

So, let's get to it.

Instagram provides three different sizes of each image you upload: thumbnail, medium, and large. If you wanted to access those sizes for the following picture, https://www.instagram.com/p/BI0BQkfh0ed, you would do as follows.

// Thumbnail
https://www.instagram.com/p/BI0BQkfh0ed/media/?size=t
// Medium
https://www.instagram.com/p/BI0BQkfh0ed/media/?size=m
// Large
https://www.instagram.com/p/BI0BQkfh0ed/media/?size=l

The last letters—t, m, and l—represent the size of the image. These links redirect to the Instagram media's URL of the size you specify, and can be used as the src attribute on an HTML img, or as a way to download Instagram pictures.
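If you need the direct media URL rather than the redirecting link (for example, to download the file), a small sketch with the Fetch API could resolve it; fetch follows the redirect, and response.url holds the final URL.

const mediaUrl = "https://www.instagram.com/p/BI0BQkfh0ed/media/?size=l";

fetch(mediaUrl).then((response) => {
    console.log(response.url); // the direct URL of the large image
});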
